Adding benchmark workflows #923

Merged 2 commits on Apr 30, 2021 · Changes from all commits
43 changes: 43 additions & 0 deletions .github/actions/run-benchmarks/action.yaml
@@ -0,0 +1,43 @@
name: 'Run benchmark'
inputs:
  framework:
    description: 'The target framework to use (e.g. net5.0)'
    required: false
    default: 'net5.0'
  runtimes:
    description: 'The runtime version(s) to use (e.g. netcoreapp31, net5.0)'
    required: false
    default: 'net5.0'
  output-folder:
    description: 'The output folder for the benchmark (a results folder is created inside)'
    required: false
    default: 'Artifacts/Benchmark'
  exporters:
    description: 'The exporter(s) used for this run (GitHub/StackOverflow/RPlot/CSV/JSON/HTML/XML)'
    required: false
    default: fulljson rplot
  filter:
    description: 'An optional class filter to apply'
    required: false
    default: '*'
  categories:
    description: 'An optional categories filter to apply'
    required: false
    default: ''
  execution-options:
    description: 'Any additional parameters passed to the benchmark'
    required: false
    default: ''
runs:
  using: "composite"
  steps:
    - name: Generating benchmark results
      run: >
        dotnet run --project UnitsNet.Benchmark -c Release
        --framework ${{ inputs.framework }}
        --runtimes ${{ inputs.runtimes }}
        --artifacts '${{ inputs.output-folder }}'
        --exporters ${{ inputs.exporters }}
        --filter '${{ inputs.filter }}'
        ${{ inputs.categories }} ${{ inputs.execution-options }}
      shell: bash
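For orientation, the composite action above is essentially string substitution around a single `dotnet run` invocation. A minimal Python sketch of that expansion, reusing the input names and defaults from the action above (the real templating is done by GitHub Actions, not by code like this):

```python
# Sketch: how the composite action's inputs expand into one dotnet command line.
# The keys and defaults mirror action.yaml above; empty inputs are dropped.
defaults = {
    "framework": "net5.0",
    "runtimes": "net5.0",
    "output-folder": "Artifacts/Benchmark",
    "exporters": "fulljson rplot",
    "filter": "*",
    "categories": "",
    "execution-options": "",
}

def build_command(inputs: dict) -> str:
    merged = {**defaults, **inputs}
    parts = [
        "dotnet run --project UnitsNet.Benchmark -c Release",
        f"--framework {merged['framework']}",
        f"--runtimes {merged['runtimes']}",
        f"--artifacts '{merged['output-folder']}'",
        f"--exporters {merged['exporters']}",
        f"--filter '{merged['filter']}'",
        merged["categories"],
        merged["execution-options"],
    ]
    return " ".join(p for p in parts if p)

print(build_command({"runtimes": "netcoreapp21"}))
```

The workflows below call the action with only the inputs they need to override, relying on these defaults for the rest.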
101 changes: 101 additions & 0 deletions .github/workflows/continious-benchmarking.yml
@@ -0,0 +1,101 @@
name: UnitsNet Benchmarks (auto)
on:
  push:
    branches: [master]
    paths:
      - "UnitsNet/*"
      - "UnitsNet.Benchmark/*"
      - ".github/workflows/**"
      - ".github/actions/**"

env:
  FRAMEWORK: net5.0
  EXECUTION_OPTIONS: --iterationTime 500 --disableLogFile # see https://benchmarkdotnet.org/articles/guides/console-args.html
  BENCHMARK_PAGES_BRANCH: gh-pages
  BENCHMARK_DATA_FOLDER: benchmarks

jobs:
  benchmark:
    runs-on: windows-latest # required by the older frameworks
    strategy:
      # max-parallel: 1 # is it better to avoid running in parallel?
      matrix:
        runtime: ["netcoreapp50", "netcoreapp21", "net472"]
    steps:
      - run: echo Starting benchmarks for ${{ matrix.runtime }}

      # checkout the current branch
      - uses: actions/checkout@v2

      # we need all frameworks (even if only running one target at a time)
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '2.1.x'

      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.x'

      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '5.0.x'

      # executing the benchmark for the current framework, placing the result in a corresponding sub-folder
      - uses: ./.github/actions/run-benchmarks
        with:
          framework: ${{ env.FRAMEWORK }}
          runtimes: ${{ matrix.runtime }}
          output-folder: Artifacts/Benchmark/${{ matrix.runtime }}
          execution-options: ${{ env.EXECUTION_OPTIONS }}

      # saving the current artifact (downloadable until the expiration date of this action)
      - name: Store benchmark artifact
        uses: actions/upload-artifact@v2
        with:
          name: UnitsNet Benchmarks (${{ matrix.runtime }})
          path: Artifacts/Benchmark/${{ matrix.runtime }}/results/

  publish-results:
    needs: [benchmark]
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 1 # cannot commit on the same branch in parallel
      matrix:
        runtime: ["netcoreapp50", "netcoreapp21", "net472"]
    steps:
      - name: Initializing git folder
        uses: actions/[email protected]

      - name: Download Artifacts # The benchmark results are downloaded into a 'runtime' folder.
        uses: actions/download-artifact@v1
        with:
          name: UnitsNet Benchmarks (${{ matrix.runtime }})
          path: ${{ matrix.runtime }}

      # publishing the current results to the benchmark-pages branch (overriding the previous result)
      - name: Saving benchmark results to ${{ env.BENCHMARK_PAGES_BRANCH }}/${{ env.BENCHMARK_DATA_FOLDER }}/${{ matrix.runtime }}/results
        uses: JamesIves/[email protected]
        with:
          folder: ${{ matrix.runtime }}
          branch: ${{ env.BENCHMARK_PAGES_BRANCH }}
          target-folder: ${{ env.BENCHMARK_DATA_FOLDER }}/${{ matrix.runtime }}/results
          commit-message: Automatic benchmark generation for ${{ github.sha }}

      # appending to the running benchmark data on the benchmark-pages branch
      - name: Updating benchmark charts
        uses: starburst997/[email protected]
        with:
          name: UnitsNet Benchmarks (${{ matrix.runtime }})
          tool: 'benchmarkdotnet'
          output-file-path: ${{ matrix.runtime }}/UnitsNet.Benchmark.UnitsNetBenchmarks-report-full.json
          gh-pages-branch: ${{ env.BENCHMARK_PAGES_BRANCH }}
          benchmark-data-dir-path: ${{ env.BENCHMARK_DATA_FOLDER }}/${{ matrix.runtime }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
          # Show alert with commit comment on detecting possible performance regression
          alert-threshold: '200%'
          comment-always: true
          comment-on-alert: true
          fail-on-alert: false
          alert-comment-cc-users: '@lipchev'
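The `alert-threshold: '200%'` setting above makes the chart action flag any benchmark whose new result is more than twice its previous one. A rough Python sketch of that check (the data shapes and benchmark names here are illustrative assumptions, not the action's actual code, which compares the BenchmarkDotNet full-JSON reports internally):

```python
# Sketch: flag benchmarks whose current mean exceeds the previous mean
# by more than the alert threshold ("200%" means current > 2x previous).
def parse_threshold(threshold: str) -> float:
    """Convert a percentage string like '200%' into a ratio like 2.0."""
    return float(threshold.rstrip("%")) / 100.0

def find_regressions(previous: dict, current: dict, threshold: str = "200%") -> list:
    ratio_limit = parse_threshold(threshold)
    alerts = []
    for name, prev_mean in previous.items():
        curr_mean = current.get(name)
        if curr_mean is not None and curr_mean > prev_mean * ratio_limit:
            alerts.append((name, curr_mean / prev_mean))
    return alerts

# Illustrative mean timings in ns, keyed by benchmark name:
previous = {"Length.Parse": 100.0, "Length.ToUnit": 50.0}
current = {"Length.Parse": 250.0, "Length.ToUnit": 55.0}
print(find_regressions(previous, current))  # only Length.Parse trips the 200% alert
```

With `comment-on-alert: true` and `fail-on-alert: false`, a tripped alert comments on the commit (cc'ing `@lipchev`) but does not fail the workflow.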

142 changes: 142 additions & 0 deletions .github/workflows/run-benchmarks.yml
@@ -0,0 +1,142 @@
name: Run UnitsNet Benchmarks
on:
  workflow_dispatch:
    inputs:
      benchmark-name:
        description: 'A name given for this benchmark'
        required: false
        default: 'UnitsNet Benchmarks'
      runtimes:
        description: 'The runtime version(s) to use (e.g. net472 net48 netcoreapp21 netcoreapp31 netcoreapp50)'
        default: net472 netcoreapp21 netcoreapp50
        required: true
      exporters:
        description: 'The exporter(s) used for this run (GitHub/StackOverflow/RPlot/CSV/JSON/HTML/XML)'
        required: false
        default: fulljson rplot
      filter:
        description: 'An optional class filter to apply'
        required: false
        default: '*'
      categories:
        description: 'An optional categories filter to apply'
        required: false
        default: ''
      execution-options:
        description: 'Any additional parameters passed to the benchmark'
        required: false
        default: --disableLogFile
      comparison-baseline:
        description: 'Compare against a previous result (expecting a link to *-report-full.json)'
        required: true
        default: 'https://angularsen.github.io/UnitsNet/benchmarks/netcoreapp50/results/UnitsNet.Benchmark.UnitsNetBenchmarks-report-full.json'
      comparison-threshold:
        description: 'The (comparison) threshold for the statistical test. Examples: 5%, 10ms, 100ns, 1s'
        required: false
        default: '1%'
      comparison-top:
        description: 'Filter the comparison to the top/bottom N results'
        required: false
        default: 10
      framework:
        description: 'The target framework version to use (e.g. net5.0)'
        default: 'net5.0'
        required: true
jobs:
  benchmark:
    env:
      OUTPUT_FOLDER: Artifacts/Benchmarks/${{ github.event.inputs.benchmark-name }}
    runs-on: windows-latest
    steps:
      # checkout the current branch
      - uses: actions/checkout@v2

      # we need all frameworks (even if only running one target at a time)
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '2.1.x'

      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.x'

      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '5.0.x'

      # executing the benchmark for the current framework(s), placing the result in the output-folder
      - uses: ./.github/actions/run-benchmarks
        with:
          framework: ${{ github.event.inputs.framework }}
          runtimes: ${{ github.event.inputs.runtimes }}
          output-folder: ${{ env.OUTPUT_FOLDER }}
          filter: ${{ github.event.inputs.filter }}
          categories: ${{ github.event.inputs.categories }}
          execution-options: ${{ github.event.inputs.execution-options }}

      # saving the current artifact (downloadable until the expiration date of this action)
      - name: Store benchmark result
        uses: actions/upload-artifact@v2
        with:
          name: ${{ github.event.inputs.benchmark-name }} (${{ github.event.inputs.runtimes }})
          path: ${{ env.OUTPUT_FOLDER }}/results
          if-no-files-found: error

  compare-results:
    if: ${{ endsWith(github.event.inputs.comparison-baseline, '-report-full.json') }}
    env:
      OUTPUT_NAME: Baseline vs ${{ github.event.inputs.benchmark-name }} (${{ github.event.inputs.runtimes }})
    needs: [benchmark]
    runs-on: ubuntu-latest
    steps:
      # The baseline results are downloaded into a 'baseline' folder.
      - name: Download Baseline
        uses: carlosperate/[email protected]
        with:
          file-url: ${{ github.event.inputs.comparison-baseline }}
          location: baseline

      # The benchmark results are downloaded into a 'results' folder.
      - name: Download Artifacts
        uses: actions/download-artifact@v1
        with:
          name: ${{ github.event.inputs.benchmark-name }} (${{ github.event.inputs.runtimes }})
          path: results

      # The benchmark comparer is currently taken from dotnet/performance/src/tools/ResultsComparer (hoping that a 'tool' would eventually be made available)
      - name: Download ResultsComparer
        uses: actions/checkout@v2
        with:
          repository: dotnet/performance
          path: comparer

      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '3.1.x'

      - run: mkdir -p artifacts

      # Executing the comparer, placing the results in the 'artifacts' folder (as well as creating a summary from the output)
      - name: Running the ResultsComparer
        env:
          PERFLAB_TARGET_FRAMEWORKS: netcoreapp3.1
        run: >
          dotnet run --project 'comparer/src/tools/ResultsComparer' -c Release
          --framework netcoreapp31
          --base baseline --diff results
          --csv "artifacts/${{ env.OUTPUT_NAME }}.csv"
          --xml "artifacts/${{ env.OUTPUT_NAME }}.xml"
          --threshold ${{ github.event.inputs.comparison-threshold }}
          --top ${{ github.event.inputs.comparison-top }}
          > "artifacts/${{ env.OUTPUT_NAME }}.md"

      - name: Summary
        run: cat "artifacts/${{ env.OUTPUT_NAME }}.md"

      # saving the current artifacts (downloadable until the expiration date of this action)
      - name: Store comparison result
        uses: actions/upload-artifact@v2
        with:
          name: ${{ env.OUTPUT_NAME }}
          path: artifacts
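The `comparison-threshold` input accepts either a relative value (`1%`, `5%`) or an absolute time (`10ms`, `100ns`, `1s`), below which a difference is treated as noise. A simplified Python sketch of that classification (ResultsComparer additionally applies a statistical test over the measurement distributions; this only illustrates the threshold logic and is not the tool's code):

```python
# Sketch: classify a benchmark result as Faster/Slower/Same given a threshold
# that is either relative ("1%") or absolute ("10ms", "100ns", "1s").
UNITS_NS = {"ns": 1.0, "us": 1e3, "ms": 1e6, "s": 1e9}  # checked longest-suffix first

def exceeds(base_ns: float, diff_ns: float, threshold: str) -> bool:
    t = threshold.strip()
    if t.endswith("%"):
        return abs(diff_ns - base_ns) > base_ns * float(t[:-1]) / 100.0
    for unit, scale in UNITS_NS.items():
        if t.endswith(unit):
            return abs(diff_ns - base_ns) > float(t[: -len(unit)]) * scale
    raise ValueError(f"unrecognized threshold: {threshold}")

def compare(base_ns: float, diff_ns: float, threshold: str = "1%") -> str:
    if not exceeds(base_ns, diff_ns, threshold):
        return "Same"
    return "Slower" if diff_ns > base_ns else "Faster"

print(compare(100.0, 103.0, "1%"))  # a 3% change past a 1% threshold -> "Slower"
```

The `--top N` option then trims the report to the N most improved and N most regressed entries, which keeps the generated summary readable.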

21 changes: 18 additions & 3 deletions UnitsNet.Benchmark/Program.cs
Expand Up @@ -11,62 +11,77 @@ public class UnitsNetBenchmarks
        private IQuantity lengthIQuantity = Length.FromMeters(3.0);

        [Benchmark]
        [BenchmarkCategory("Construction")]
        public Length Constructor() => new Length(3.0, LengthUnit.Meter);

        [Benchmark]
        [BenchmarkCategory("Construction")]
        public Length Constructor_SI() => new Length(3.0, UnitSystem.SI);

        [Benchmark]
        [BenchmarkCategory("Construction")]
        public Length FromMethod() => Length.FromMeters(3.0);

        [Benchmark]
        [BenchmarkCategory("Transformation")]
        public double ToProperty() => length.Centimeters;

        [Benchmark]
        [BenchmarkCategory("Transformation, Value")]
        public double As() => length.As(LengthUnit.Centimeter);

        [Benchmark]
        [BenchmarkCategory("Transformation, Value")]
        public double As_SI() => length.As(UnitSystem.SI);

        [Benchmark]
        [BenchmarkCategory("Transformation, Quantity")]
        public Length ToUnit() => length.ToUnit(LengthUnit.Centimeter);

        [Benchmark]
        [BenchmarkCategory("Transformation, Quantity")]
        public Length ToUnit_SI() => length.ToUnit(UnitSystem.SI);

        [Benchmark]
        [BenchmarkCategory("ToString")]
        public string ToStringTest() => length.ToString();

        [Benchmark]
        [BenchmarkCategory("Parsing")]
        public Length Parse() => Length.Parse("3.0 m");

        [Benchmark]
        [BenchmarkCategory("Parsing")]
        public bool TryParseValid() => Length.TryParse("3.0 m", out var l);

        [Benchmark]
        [BenchmarkCategory("Parsing")]
        public bool TryParseInvalid() => Length.TryParse("3.0 zoom", out var l);

        [Benchmark]
        [BenchmarkCategory("Construction")]
        public IQuantity QuantityFrom() => Quantity.From(3.0, LengthUnit.Meter);

        [Benchmark]
        [BenchmarkCategory("Transformation, Value")]
        public double IQuantity_As() => lengthIQuantity.As(LengthUnit.Centimeter);

        [Benchmark]
        [BenchmarkCategory("Transformation, Value")]
        public double IQuantity_As_SI() => lengthIQuantity.As(UnitSystem.SI);

        [Benchmark]
        [BenchmarkCategory("Transformation, Quantity")]
        public IQuantity IQuantity_ToUnit() => lengthIQuantity.ToUnit(LengthUnit.Centimeter);

        [Benchmark]
        [BenchmarkCategory("ToString")]
        public string IQuantity_ToStringTest() => lengthIQuantity.ToString();
    }

    class Program
    {
        static void Main(string[] args)
            => BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args); // replaces: var summary = BenchmarkRunner.Run<UnitsNetBenchmarks>();
    }
}
17 changes: 17 additions & 0 deletions UnitsNet.Benchmark/Scripts/json-export-all-runtimes.bat
@@ -0,0 +1,17 @@
@echo off
SET scriptdir=%~dp0
SET projectdir="%scriptdir%..\.."
SET exportdir="%projectdir%\Artifacts\Benchmark"
:: this fails on the build server (also tested with the nightly benchmark.net package: 0.12.1.1533): possibly related to https://github.com/dotnet/BenchmarkDotNet/issues/1487
dotnet run --project "%projectdir%/UnitsNet.Benchmark" -c Release ^
--framework net5.0 ^
--runtimes net472 net48 netcoreapp2.1 netcoreapp3.1 netcoreapp50 ^
--artifacts=%exportdir% ^
--exporters json ^
--filter * ^
--iterationTime 250 ^
--statisticalTest 0.001ms ^
--join %1 %2 %3

:: this runs fine, however there is currently no way of displaying multiple-lines-per-chart: see https://github.com/rhysd/github-action-benchmark/issues/18
:: dotnet run --project "%scriptdir%/UnitsNet.Benchmark" -c Release -f net5.0 --runtimes netcoreapp31 netcoreapp50 --filter ** --artifacts="%scriptdir%/Artifacts/Benchmark" --exporters json