[cperf] Update README to reflect recent changes. #246

Merged · 1 commit · Sep 11, 2014
75 changes: 52 additions & 23 deletions src/scripts/cperf/README.md
@@ -1,35 +1,64 @@
# Continuous performance benchmarks: `cperf.sh`

The `cperf.sh` script can be used to run, compare and visualize how a set
of commits perform on a set of benchmarks. It is intended to be called
either manually, by a `post-merge` *Git hook* in an arbitrary *slave
repository* which is to be periodically updated (e.g. `git pull`) by a
*cronjob*, or by *SnabbBot*.
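
As an illustration, such a `post-merge` hook in the slave repository might
boil down to a one-line call. The `CPERFDIR` path and the commit range
below are assumptions for the sketch, not something `cperf.sh` prescribes:

```shell
#!/usr/bin/env bash
# .git/hooks/post-merge in the slave repository (illustrative sketch).
# Git runs this hook after every `git pull` that results in a merge.
export CPERFDIR="$HOME/cperf"      # hypothetical dedicated directory
exec cperf.sh plot HEAD...HEAD~20  # hypothetical commit range to plot
```

Note that Git only runs hook files that are executable
(`chmod +x .git/hooks/post-merge`).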

## Usage

`cperf.sh` has two execution modes: `check` and `plot`. In `check` mode
`cperf.sh` accepts a set of *commit hashes*. It will run the *benchmark
scripts* for each commit hash and print the results of each run. This
mode is suited to testing and verifying performance changes during
development.

For instance, if you branch off `master` and want to test the performance
improvements of your branch `performance-improved`, you would call
`cperf.sh` like so:

```
$ cperf.sh check master performance-improved
Comparing with SAMPLESIZE=5
(benchmark, abbrev. sha1 sum, mean score, standard deviation)
your_benchmark master 10 0.1
your_benchmark performance-improved 12 0.5
```

In `plot` mode `cperf.sh` will run your benchmark scripts for each *merge
commit* in a *commit range* as recognized by Git and record their results
continuously (e.g. benchmarks will *not* be run again for already
benchmarked commits) which are then used to produce a linear graph plot
using *Gnuplot*. For instance, to produce a plot for the commits starting
from `e14f3` you would call `cperf.sh` like so: `cperf.sh HEAD...e14f3`
benchmarked commits). The results are then used to produce a linear graph
plot using *Gnuplot*. `plot` mode will populate a sub-directory
`results/` in `CPERFDIR` which will contain a file `benchmarks.png` (the
resulting plot). See *Requirements* below. For instance, to produce a
plot for the commits starting from `e14f3` you would call `cperf.sh` like
so:

```
cperf.sh plot HEAD...e14f3
```

In both modes `cperf.sh` will run each benchmark multiple times for each
commit hash in order to compute mean and standard deviation values. You
can adjust the `SAMPLESIZE` environment variable (which defaults to 5) in
order to control the number of times each benchmark is run.
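
The default handling follows the usual shell convention and can be
sketched as follows (a minimal sketch, not the actual `cperf.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the SAMPLESIZE convention: use the value from the
# environment if set, otherwise fall back to the default of 5 runs.
SAMPLESIZE=${SAMPLESIZE:-5}
echo "Comparing with SAMPLESIZE=$SAMPLESIZE"
```

So, for example, `SAMPLESIZE=10 cperf.sh check master performance-improved`
runs every benchmark ten times per commit.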

## Requirements

You will need to create a dedicated directory for use by `cperf.sh` and
set the `CPERFDIR` environment variable to point to that directory. This
directory must contain a `benchmarks/` sub-directory which must contain
the *benchmark scripts* to be evaluated. A collection of *benchmark
scripts* for use with `cperf.sh` can be found in
[benchmarks/](benchmarks/).
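
The setup might look like this (the `~/cperf` location is an assumption;
any dedicated directory works):

```shell
# Create the dedicated directory and the required benchmarks/ sub-directory.
mkdir -p "$HOME/cperf/benchmarks"
# Point cperf.sh at it; add this to your shell profile to make it permanent.
export CPERFDIR="$HOME/cperf"
# Then place (or symlink) the benchmark scripts to evaluate into
# $CPERFDIR/benchmarks/.
```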

## Interface

A *benchmark script* is defined to be an *executable program* that prints
a single floating point number to `stdout` and exits with a meaningful
status: if the benchmark fails, its *benchmark script* should exit with a
status `!= 0`.
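
For illustration, a trivial script honoring this contract could look as
follows. The workload here is a placeholder; a real *benchmark script*
would measure something meaningful:

```shell
#!/usr/bin/env bash
# Minimal benchmark script sketch: time a (placeholder) workload, print
# a single floating point score to stdout, and exit non-zero on failure.
start=$(date +%s.%N)
seq 1 100000 > /dev/null                # placeholder workload
end=$(date +%s.%N)
# Compute elapsed seconds with awk (avoids a dependency on bc).
score=$(awk -v a="$start" -v b="$end" 'BEGIN { printf "%.3f", b - a }')
if [ -z "$score" ]; then
    exit 1                              # failure: no score produced
fi
echo "$score"
```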

## Note regarding the `loadgen` benchmark

2 changes: 1 addition & 1 deletion src/scripts/cperf/cperf.sh
```diff
@@ -158,7 +158,7 @@ function plot_mode {
 function check_mode {

   # Print header.
-  echo "Comparing with SAMPLESIZE=$SAMPLESIZE)"
+  echo "Comparing with SAMPLESIZE=$SAMPLESIZE"
   echo "(benchmark, abbrev. sha1 sum, mean score, standard deviation)"

   # Iterate over benchmarks.
```