
Update format.yml | Do push instead of PR after formatting #608

Merged: 11 commits, Nov 25, 2024
15 changes: 8 additions & 7 deletions .github/workflows/format.yml
@@ -46,14 +46,15 @@ jobs:
git config --global user.email "[email protected]"
# Commit changes
git commit -m '[Automated Commit] Format Codebase'
git push

# Push changes to a new branch
BRANCH_NAME="auto/code-format"
git branch $BRANCH_NAME
git push origin $BRANCH_NAME --force
#BRANCH_NAME="auto/code-format"
#git branch $BRANCH_NAME
#git push origin $BRANCH_NAME --force

# Create a pull request to the "code-format" branch
gh pr create --base code-format --head $BRANCH_NAME --title "[Automated PR] Format Codebase" --body "This pull request contains automated code formatting changes."
fi
env:
GH_TOKEN: ${{ secrets.ACCESS_TOKEN }}
#gh pr create --base code-format --head $BRANCH_NAME --title "[Automated PR] Format Codebase" --body "This pull request contains automated code formatting changes."
fi
# env:
# GH_TOKEN: ${{ secrets.ACCESS_TOKEN }}
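The diff above replaces the branch-plus-PR flow with a direct `git push` after the automated commit. A minimal Python sketch (a hypothetical helper, not part of the repo) contrasting the two command sequences the workflow step runs:

```python
def format_commands(direct_push: bool) -> list[str]:
    """Return the git/gh commands run after the formatter commits.

    direct_push=True models the new behavior (push to the current branch);
    False models the old branch-plus-PR flow. Branch and PR strings mirror
    the workflow; the helper itself is illustrative only.
    """
    cmds = ["git commit -m '[Automated Commit] Format Codebase'"]
    if direct_push:
        # New flow: push straight to the checked-out branch.
        cmds.append("git push")
    else:
        # Old flow: force-push a side branch, then open a PR against code-format.
        branch = "auto/code-format"
        cmds += [
            f"git branch {branch}",
            f"git push origin {branch} --force",
            f"gh pr create --base code-format --head {branch} "
            "--title '[Automated PR] Format Codebase' "
            "--body 'This pull request contains automated code formatting changes.'",
        ]
    return cmds
```

Note the old flow also needed `GH_TOKEN` for `gh pr create`, which is why the `env:` block is commented out alongside it.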
@@ -2,7 +2,7 @@ name: MLPerf Inference Nvidia implementations

on:
schedule:
- cron: "55 01 * * *" #to be adjusted
- cron: "15 02 * * *" #to be adjusted

jobs:
run_nvidia:
2 changes: 1 addition & 1 deletion git_commit_hash.txt
@@ -1 +1 @@
4ac9f687880e17058f0b48fd95731d7090ea7db4
f57fe4f69aa2d88c0c0ceb925a4a61723917c8e8
2 changes: 1 addition & 1 deletion script/draw-graph-from-json-data/process-cm-deps.py
@@ -112,7 +112,7 @@ def main():
G = generate_graph_from_nested_json(
json_data, output_image=args.output_image)

generate_mermaid_output(json_data, mermaid_file="graph.mmd")
generate_mermaid_output(json_data, mermaid_file=args.output_mermaid)

# Export the graph data
export_graph_data(G, filename=args.output_graphml)
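The change replaces the hardcoded `graph.mmd` with a CLI argument. A hypothetical reconstruction of the parser this implies (the actual flags and defaults in the script may differ):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the script's CLI: the diff implies an
    # --output-mermaid option alongside the existing image/graphml outputs.
    parser = argparse.ArgumentParser(description="Render CM dependency graphs")
    parser.add_argument("--output-image", default="graph.png")
    parser.add_argument("--output-mermaid", default="graph.mmd",
                        help="Mermaid (.mmd) output path, previously hardcoded")
    parser.add_argument("--output-graphml", default="graph.graphml")
    return parser
```

With a default matching the old hardcoded name, existing invocations keep producing `graph.mmd` while new callers can redirect the Mermaid output.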
63 changes: 63 additions & 0 deletions script/get-ml-model-rgat/_cm.yaml
@@ -0,0 +1,63 @@
alias: get-ml-model-rgat
automation_alias: script
automation_uid: 5b4e0237da074764
cache: true
category: AI/ML models
env:
CM_ML_MODEL: RGAT
CM_ML_MODEL_DATASET: ICBH
input_mapping:
checkpoint: RGAT_CHECKPOINT_PATH
download_path: CM_DOWNLOAD_PATH
to: CM_DOWNLOAD_PATH
new_env_keys:
- CM_ML_MODEL_*
- RGAT_CHECKPOINT_PATH
prehook_deps:
- enable_if_env:
CM_DOWNLOAD_TOOL:
- rclone
CM_TMP_REQUIRE_DOWNLOAD:
- 'yes'
env:
CM_DOWNLOAD_FINAL_ENV_NAME: CM_ML_MODEL_PATH
extra_cache_tags: rgat,gnn,model
force_cache: true
names:
- dae
tags: download-and-extract
update_tags_from_env_with_prefix:
_url.:
- CM_DOWNLOAD_URL
print_env_at_the_end:
RGAT_CHECKPOINT_PATH: R-GAT checkpoint path
tags:
- get
- raw
- ml-model
- rgat
uid: b409fd66c5ad4ed5
variations:
fp32:
default: true
env:
CM_ML_MODEL_INPUT_DATA_TYPES: fp32
CM_ML_MODEL_PRECISION: fp32
CM_ML_MODEL_WEIGHT_DATA_TYPES: fp32
group: precision
mlcommons:
default: true
default_variations:
download-tool: rclone
group: download-source
rclone:
adr:
dae:
tags: _rclone
env:
CM_DOWNLOAD_TOOL: rclone
CM_RCLONE_CONFIG_NAME: mlc-inference
group: download-tool
rclone,fp32:
env:
CM_DOWNLOAD_URL: mlc-inference:mlcommons-inference-wg-public/R-GAT/RGAT.pt
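The `variations` block combines env settings from single variations (`rclone`, `fp32`) with combined keys (`rclone,fp32`). A simplified Python sketch of that merge, using the values from this YAML (real CM resolution also handles groups, defaults, and dependency tags):

```python
def resolve_variation_env(variations: dict, selected: list[str]) -> dict:
    """Merge env blocks for the selected variations, simplified:
    single-variation envs first, then the combined 'a,b' key if present."""
    env = {}
    for name in selected:
        env.update(variations.get(name, {}).get("env", {}))
    combo = ",".join(selected)  # simplification: assumes the YAML key's order
    env.update(variations.get(combo, {}).get("env", {}))
    return env


# Values taken from the _cm.yaml above.
variations = {
    "fp32": {"env": {"CM_ML_MODEL_PRECISION": "fp32"}},
    "rclone": {"env": {"CM_DOWNLOAD_TOOL": "rclone",
                       "CM_RCLONE_CONFIG_NAME": "mlc-inference"}},
    "rclone,fp32": {"env": {
        "CM_DOWNLOAD_URL":
            "mlc-inference:mlcommons-inference-wg-public/R-GAT/RGAT.pt"}},
}
env = resolve_variation_env(variations, ["rclone", "fp32"])
```

Selecting both defaults (`rclone` for download-tool, `fp32` for precision) thus yields the R-GAT checkpoint URL via the combined `rclone,fp32` entry.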
29 changes: 29 additions & 0 deletions script/get-ml-model-rgat/customize.py
@@ -0,0 +1,29 @@
from cmind import utils
import os


def preprocess(i):

os_info = i['os_info']
env = i['env']

path = env.get('RGAT_CHECKPOINT_PATH', '').strip()

if path == '' or not os.path.exists(path):
env['CM_TMP_REQUIRE_DOWNLOAD'] = 'yes'

return {'return': 0}


def postprocess(i):

env = i['env']

if env.get('RGAT_CHECKPOINT_PATH', '') == '':
env['RGAT_CHECKPOINT_PATH'] = env['CM_ML_MODEL_PATH']
elif env.get('CM_ML_MODEL_PATH', '') == '':
env['CM_ML_MODEL_PATH'] = env['RGAT_CHECKPOINT_PATH']

env['CM_GET_DEPENDENT_CACHED_PATH'] = env['RGAT_CHECKPOINT_PATH']

return {'return': 0}
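The two hooks can be exercised standalone. A self-contained sketch (reproducing the hook bodies above, with a hypothetical cache path standing in for what the download dependency would set):

```python
import os


# The customize.py hooks, reproduced so the sketch runs on its own; in the
# repo they receive their input dict `i` from the CM runtime.
def preprocess(i):
    env = i['env']
    path = env.get('RGAT_CHECKPOINT_PATH', '').strip()
    if path == '' or not os.path.exists(path):
        # Missing/invalid checkpoint triggers the download-and-extract dep.
        env['CM_TMP_REQUIRE_DOWNLOAD'] = 'yes'
    return {'return': 0}


def postprocess(i):
    env = i['env']
    # Keep RGAT_CHECKPOINT_PATH and CM_ML_MODEL_PATH mirrored.
    if env.get('RGAT_CHECKPOINT_PATH', '') == '':
        env['RGAT_CHECKPOINT_PATH'] = env['CM_ML_MODEL_PATH']
    elif env.get('CM_ML_MODEL_PATH', '') == '':
        env['CM_ML_MODEL_PATH'] = env['RGAT_CHECKPOINT_PATH']
    env['CM_GET_DEPENDENT_CACHED_PATH'] = env['RGAT_CHECKPOINT_PATH']
    return {'return': 0}


# No checkpoint provided: preprocess marks the model for download.
i = {'os_info': {}, 'env': {}}
preprocess(i)

# Pretend the rclone dep populated CM_ML_MODEL_PATH (hypothetical path),
# then postprocess mirrors it into RGAT_CHECKPOINT_PATH.
i['env']['CM_ML_MODEL_PATH'] = '/cache/RGAT.pt'
postprocess(i)
```

This mirrors the `prehook_deps` gating in `_cm.yaml`: the download runs only when `CM_TMP_REQUIRE_DOWNLOAD` is `'yes'`, and its result flows back through `CM_ML_MODEL_PATH`.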