[WIP/RFC] Start benchmarking #69
Conversation
Codecov Report

@@            Coverage Diff            @@
##           master      #69   +/-   ##
=========================================
  Coverage   82.96%   82.96%
=========================================
  Files           4        4
  Lines         182      182
=========================================
  Hits          151      151
  Misses         31       31

Continue to review full report at Codecov.
Great idea. Are you thinking that performance loss will cause a test failure? I've noticed in ImageCore that Travis is not great for performance-testing, though it's partially due to bounds-checking during the running of tests. (That wouldn't affect this package.)
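One way to make that concrete, sketched below purely for illustration (the baseline file name, the 10% tolerance, and the sample benchmark are assumptions, not part of this PR), is to compare a fresh measurement against a stored baseline with BenchmarkTools' judge and fail the test on a regression:

# Hypothetical sketch: fail a test when a benchmark regresses past a tolerance.
using BenchmarkTools, FixedPointNumbers, Test

b = @benchmarkable sum(x) setup = (x = rand(N0f8, 10^4))
tune!(b)
current = minimum(run(b))

# A previously saved TrialEstimate for the same benchmark (path is hypothetical).
baseline = BenchmarkTools.load("baseline.json")[1]

# judge() classifies the change as regression, improvement, or invariant.
@test !isregression(judge(current, baseline; time_tolerance = 0.10))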
for FT in (Q0f7, Q1f14, Q7f24, N0f8, N2f14, N0f32, N2f30, N0f64, N2f62)
    x = FT(0.25)
    # Float16 doesn't behave well
I think only on julia 0.5
Oh yeah... We (I) did change those semantics on v0.6
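For concreteness, here is a rough sketch of how a loop like the one above could populate a BenchmarkTools suite; the group name and the conversion benchmarks are illustrative assumptions, not the actual contents of this PR:

# Sketch only: the group name and the choice of conversion benchmarks are illustrative.
using BenchmarkTools, FixedPointNumbers

suite = BenchmarkGroup()
suite["conversion"] = BenchmarkGroup()

for FT in (Q0f7, Q1f14, Q7f24, N0f8, N2f14, N0f32, N2f30, N0f64, N2f62)
    x = FT(0.25)
    for T in (Float32, Float64)
        # Benchmark converting the fixed-point value to a floating-point type.
        suite["conversion"]["$FT => $T"] = @benchmarkable $T($x)
    end
end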
# faster and more reliable than re-tuning `suite` every time the file is included
paramspath = Pkg.dir("FixedPointNumbers", "benchmark", "params.jld")
# tune!(suite); JLD.save(paramspath, "suite", params(suite));
loadparams!(suite, JLD.load(paramspath, "suite"), :evals, :samples);
This should probably check to see if the parameters have been generated locally, and if not, generate them.
You don't want to check in params.jld; the whole point of the benchmark parameter tuning process is that BenchmarkTools is trying to estimate the most stable/efficient experimental configuration for your machine.
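Something along those lines, assuming the same suite and JLD setup as in this file, might look like:

# Sketch of the suggestion: tune once per machine and cache the result locally,
# rather than checking params.jld into the repository. Assumes `suite` is defined above.
using BenchmarkTools, JLD

paramspath = Pkg.dir("FixedPointNumbers", "benchmark", "params.jld")

if isfile(paramspath)
    # Reuse the machine-specific parameters cached on a previous run.
    loadparams!(suite, JLD.load(paramspath, "suite"), :evals, :samples)
else
    # First run on this machine: tune and cache (keep params.jld out of version control).
    tune!(suite)
    JLD.save(paramspath, "suite", params(suite))
end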
Also, I do realize this comes from the BenchmarkTools example, so I should probably update that to reflect what I'm actually saying here...
For future reference, here is the situation as of 2019. Benchmark results are highly dependent on CPU throttling and caching, so it makes little sense to keep benchmarking with the same cached inputs via JLD. In addition, SIMD is an important factor in computing speed on modern CPUs, and benchmarks of a single operation can easily mislead us. Julia may also give up on an optimization because of seemingly minor changes (cf. PR #145). Since the cause of a slowdown is unlikely to show up in the scores themselves, we will need to manage our score book carefully.
Agreed with all these reservations. Of course, they seem fixable; it will just take work to write a diverse enough set of tests to capture all the nuances.
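As a small illustration of the SIMD point above (the array size and the conversion are arbitrary choices, not from this PR), a single-call benchmark and a broadcast benchmark of the same operation can tell very different stories, because only the array version gives the compiler a chance to vectorize:

# Illustrative only: compare a scalar conversion with a broadcast conversion.
using BenchmarkTools, FixedPointNumbers

xs = rand(N0f8, 2^16)

scalar = @benchmarkable Float32($(xs[1]))   # one conversion at a time
vector = @benchmarkable Float32.($xs)       # SIMD-friendly broadcast

tune!(scalar); tune!(vector)
run(scalar), run(vector)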
I won't have time to finish this, and I haven't looked at FixedPointNumbers.jl in a while.
I would like to spend some time on FixedPointNumbers. One particular goal will be to get rid of as many @generated functions as possible, since we no longer need to support v0.4. But before I start doing that, we should establish a set of benchmarks that are useful and important to users of FixedPointNumbers.
If you have any particular suggestions, please add them in comments or open PRs directly to this branch.
cc: @jrevels for his expertise.