In the past I suggested that we could automatically detect performance regressions. There are some challenges with that, because some benchmarks can be noisy and may have to be manually excluded.
However, Randy recently flagged a stats regression on railsbench, which we wouldn't have spotted if he hadn't noticed it. I was thinking that it may be easier to detect regressions in the "ratio in YJIT" stat than in execution time, so this might be a good place to have an automatic tripwire. We have a lot of benchmarks, which gives us lots of opportunities to catch regressions there.
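A minimal sketch of what such a tripwire could look like, assuming each stats run dumps its counters to a JSON file with a `ratio_in_yjit` key (the file names, key, and threshold below are illustrative, not the current harness output):

```ruby
# Sketch of a "ratio in YJIT" tripwire: compare a stored baseline against the
# latest stats run and flag any drop larger than a fixed threshold.
require "json"

THRESHOLD_PCT_POINTS = 2.0 # flag drops of more than ~2 percentage points

def ratio_in_yjit(stats_file)
  JSON.parse(File.read(stats_file)).fetch("ratio_in_yjit")
end

def check_regression(benchmark, baseline_file, current_file)
  baseline = ratio_in_yjit(baseline_file)
  current  = ratio_in_yjit(current_file)
  drop = baseline - current

  if drop > THRESHOLD_PCT_POINTS
    warn format("REGRESSION: %s ratio_in_yjit dropped %.2f -> %.2f (-%.2f pts)",
                benchmark, baseline, current, drop)
    return false
  end
  true
end

# Example usage: fail the CI job if the railsbench ratio drops too much.
# exit(1) unless check_regression("railsbench",
#                                 "baseline/railsbench.json",
#                                 "current/railsbench.json")
```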
I know that stats aren't always 100% deterministic, but they do tend to end up fairly stable, so in theory we could have a threshold of 1 or 2%. We should also make sure that the number of iterations for the stats run is fixed (I believe it already is) to increase determinism. If some benchmarks have too much stats variability between runs, then we should either manually exclude them (an annotation in benchmarks.yml?) or, ideally, identify and remove the sources of non-determinism if possible (make sure all RNGs are seeded, etc.).
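For the manual exclusion idea, one possible shape would be a per-benchmark flag in benchmarks.yml that the tripwire skips. The `skip_stats_tripwire` key below is hypothetical, not an existing annotation:

```ruby
# Sketch of a per-benchmark opt-out, assuming benchmarks.yml grew a
# (hypothetical) "skip_stats_tripwire" annotation on noisy entries:
#
#   railsbench:
#     category: headline
#   some_noisy_bench:
#     skip_stats_tripwire: true   # too much run-to-run stats variability
require "yaml"

benchmarks = YAML.load_file("benchmarks.yml")

# Only benchmarks without the opt-out flag get checked by the tripwire.
to_check = benchmarks.reject do |_name, meta|
  meta.is_a?(Hash) && meta["skip_stats_tripwire"]
end

puts "Benchmarks covered by the stats tripwire: #{to_check.keys.sort.join(', ')}"
```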