
Add Benchmarks workflow #1458

Merged
merged 11 commits on Aug 24, 2024
175 changes: 175 additions & 0 deletions .github/workflows/benchmark.yml
@@ -0,0 +1,175 @@
# This is the main workflow for regenerating the benchmark data needed for Bowtie's UI.
# It runs all benchmarks against Bowtie's supported implementations, publishing the benchmark reports for use in the frontend.
name: Collect New Benchmark Results

on:
  workflow_dispatch:
  schedule:
    # Every Monday at 08:00 UTC
    - cron: "0 8 * * 1"

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  dialects:
    runs-on: ubuntu-latest
    outputs:
      dialects: ${{ steps.dialects-matrix.outputs.dialects }}
    steps:
      - uses: actions/checkout@v4
      - name: Collect supported dialects
        id: dialects-matrix
        run: |
          printf 'dialects=%s\n' "$(jq -c '[.[].shortName]' data/dialects.json)" >> "$GITHUB_OUTPUT"
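The jq filter above maps `data/dialects.json` (a list of dialect objects) to a compact JSON array of short names and appends it as a step output. A minimal Python sketch of the same transformation, assuming each entry carries a `shortName` key (the sample entries here are made up for illustration):

```python
import json

# Hypothetical excerpt of data/dialects.json.
dialects_json = """
[
  {"shortName": "draft2020-12", "uri": "https://json-schema.org/draft/2020-12/schema"},
  {"shortName": "draft7", "uri": "http://json-schema.org/draft-07/schema#"}
]
"""

# Equivalent of: jq -c '[.[].shortName]' data/dialects.json
short_names = [entry["shortName"] for entry in json.loads(dialects_json)]
output_line = f"dialects={json.dumps(short_names, separators=(',', ':'))}"
print(output_line)  # the line the step appends to $GITHUB_OUTPUT
```

`separators=(',', ':')` mimics jq's `-c` compact output, so downstream `fromJson` calls see the same string either way.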

  benchmark_files:
    needs: dialects
    runs-on: ubuntu-latest
    outputs:
      dialect_benchmarks: ${{ steps.benchmarks.outputs.dialect_benchmarks }}
    steps:
      - uses: actions/checkout@v4

      - name: Install Bowtie
        uses: ./

      - name: Collect Benchmark Files
        id: benchmarks
        run: |
          dialects='${{ needs.dialects.outputs.dialects }}'
          dialects=$(echo "$dialects" | jq -r '.[]')
          results=()

          for dialect in $dialects; do
            output=$(bowtie filter-benchmarks -D "$dialect")

            if [ -n "$output" ]; then
              while IFS= read -r line; do
                json_result=$(jq -nc --arg p "$dialect" --arg o "$line" '{ dialect: $p, benchmark: $o }')
                results+=("$json_result")
              done <<< "$output"
            fi
          done
          final_json="$(jq -sc '.' <<< "${results[@]}")"
          final_json=$(echo "$final_json" | jq -c '{ "include": . }')
          echo "$final_json" >> "$GITHUB_STEP_SUMMARY"
          echo "dialect_benchmarks=$final_json" >> "$GITHUB_OUTPUT"
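The step above builds one `{dialect, benchmark}` object per benchmark file and wraps the list in `{ "include": ... }`, the shape GitHub Actions expects for a matrix. A Python sketch of the same assembly, with made-up `bowtie filter-benchmarks` output standing in for the real CLI:

```python
import json

# Hypothetical per-dialect output of `bowtie filter-benchmarks -D <dialect>`:
# one benchmark name per line.
filter_output = {
    "draft2020-12": "nested_schemas\nuseless_keywords",
    "draft7": "nested_schemas",
}

results = []
for dialect, output in filter_output.items():
    for line in output.splitlines():
        # Equivalent of: jq -nc --arg p "$dialect" --arg o "$line" '{ dialect: $p, benchmark: $o }'
        results.append({"dialect": dialect, "benchmark": line})

# Equivalent of: jq -c '{ "include": . }'
matrix = {"include": results}
print(json.dumps(matrix))
```

Each entry in `include` becomes one `run_benchmarks` job, so every (dialect, benchmark) pair runs in its own runner.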

  run_benchmarks:
    needs: benchmark_files
    runs-on: ubuntu-latest
    timeout-minutes: 720
    strategy:
      fail-fast: false
      matrix: ${{ fromJson(needs.benchmark_files.outputs.dialect_benchmarks) }}
    steps:
      - uses: actions/checkout@v4

      - name: Install Bowtie
        uses: ./

      - name: Install pyperf dependency
        run: |
          python -m pip install pyperf

      - name: Generate Benchmark Report
        run: |
          bowtie perf $(bowtie filter-implementations | sed 's/^/-i /') -b "${{ matrix.benchmark }}" -D "${{ matrix.dialect }}" -q > "${{ matrix.benchmark }}-file.json"
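The `sed 's/^/-i /'` pipeline above turns each implementation name printed by `bowtie filter-implementations` into a separate `-i` flag for `bowtie perf`. A Python sketch of the same flag construction (the implementation names are hypothetical stand-ins):

```python
# Hypothetical line-per-name output of `bowtie filter-implementations`.
implementations = ["python-jsonschema", "go-gojsonschema", "rust-boon"]

# Equivalent of: bowtie filter-implementations | sed 's/^/-i /'
flags = " ".join(f"-i {name}" for name in implementations)
print(flags)
```

After word-splitting by the shell, each `-i <name>` pair is passed as two arguments to `bowtie perf`.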

      - name: Store Unique Id for distinction
        run: echo "unique_id=$(uuidgen)" >> "$GITHUB_ENV"

      - name: Upload Benchmark file as artifact
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-file-${{ matrix.dialect }}-${{ env.unique_id }}
          path: ${{ matrix.benchmark }}-file.json

  merge_benchmarks_into_single_report:
    needs:
      - dialects
      - run_benchmarks

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        dialect: ${{ fromJson(needs.dialects.outputs.dialects) }}

    steps:
      - uses: actions/checkout@v4

      - name: Download Benchmark Reports for a dialect
        uses: actions/download-artifact@v4
        with:
          pattern: benchmark-file-${{ matrix.dialect }}-*
          path: benchmarks-${{ matrix.dialect }}/
          merge-multiple: true

      - name: Merge Benchmark Reports for a dialect
        run: |
          python - <<EOF
          import os
          import json

          directory_path = f'benchmarks-${{ matrix.dialect }}'
          output_file = f'${{ matrix.dialect }}.json'
          metadata = None
          combined_results = []

          for filename in sorted(os.listdir(directory_path)):
              file_path = os.path.join(directory_path, filename)
              with open(file_path, 'r') as file:
                  data = json.load(file)
              if metadata is None:
                  metadata = data.get('metadata', {})
              results = data.get('results', [])
              combined_results.extend(results)

          combined_json = {
              'metadata': metadata,
              'results': combined_results
          }

          with open(output_file, 'w') as file:
              json.dump(combined_json, file, indent=4)
          EOF
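The heredoc script above keeps the metadata block of the first report it reads and concatenates every report's `results` list into one file per dialect. The same merge logic can be exercised outside the workflow; the two reports below are made-up minimal stand-ins for the per-benchmark files:

```python
import json

# Hypothetical per-benchmark reports downloaded for one dialect.
report_a = {"metadata": {"dialect": "draft2020-12"}, "results": [{"name": "bench_a"}]}
report_b = {"metadata": {"dialect": "draft2020-12"}, "results": [{"name": "bench_b"}]}

metadata = None
combined_results = []
for data in (report_a, report_b):
    # First report wins for metadata; results are concatenated.
    if metadata is None:
        metadata = data.get("metadata", {})
    combined_results.extend(data.get("results", []))

combined = {"metadata": metadata, "results": combined_results}
print(json.dumps(combined))
```

Taking metadata from the first file assumes all reports for a dialect were generated under the same conditions, which holds here because every matrix job ran the same workflow revision.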

      - name: Upload final Benchmark Report for dialect
        uses: actions/upload-artifact@v4
        with:
          name: benchmark-report-${{ matrix.dialect }}
          path: ${{ matrix.dialect }}.json

  upload_benchmark_artifact:
    needs:
      - merge_benchmarks_into_single_report

    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Create Benchmarks folder
        run: mkdir benchmarks

      - name: Include New Benchmark Reports
        uses: actions/download-artifact@v4
        with:
          pattern: benchmark-report-*
          path: benchmarks/
          merge-multiple: true

      - uses: actions/upload-artifact@v4
        with:
          name: benchmarks
          path: benchmarks

  regenerate-reports:
    needs: upload_benchmark_artifact
    uses: ./.github/workflows/report.yml
    with:
      report_benchmark_artifact_in_scope: true
20 changes: 20 additions & 0 deletions .github/workflows/report.yml
@@ -9,6 +9,9 @@ on:
        type: string
        required: false
        default: ""
      report_benchmark_artifact_in_scope:
        required: false
        type: boolean
  workflow_dispatch:
  schedule:
    # Every 6 hours, at 15 past the hour
@@ -160,6 +163,23 @@ jobs:
          path: site/
          merge-multiple: true

      - name: Include Benchmark Report from local artifact
        uses: actions/download-artifact@v4
        if: inputs.report_benchmark_artifact_in_scope == true
        with:
          name: benchmarks
          path: site/benchmarks

      # if called as a separate workflow
      - name: Download latest Benchmark Report
        if: inputs.report_benchmark_artifact_in_scope == false
        uses: dawidd6/action-download-artifact@v6
        with:
          workflow: benchmark.yml
          branch: main
          name: benchmarks
          path: site/benchmarks
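The two steps above are mutually exclusive on the boolean input: when report.yml is called from benchmark.yml the flag is true and the artifact comes from the same run; on standalone runs the input is unset, booleans default to false, and the latest `benchmarks` artifact from main is fetched instead. An illustrative helper (not part of the workflow) sketching that selection:

```python
def benchmark_source(in_scope: bool) -> str:
    # True  -> `benchmarks` artifact uploaded earlier in this same workflow run
    # False -> latest `benchmarks` artifact from benchmark.yml on main
    if in_scope:
        return "same-run artifact"
    return "latest main artifact"

print(benchmark_source(True))
print(benchmark_source(False))
```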

      - name: Include our UI data
        uses: actions/download-artifact@v4
        with: