
[WIP/RFC] Start benchmarking #69

Closed · wants to merge 1 commit

Conversation

@vchuravy (Collaborator)
I would like to spend some time on FixedPointNumbers. One particular goal will be to get rid of as many `@generated` functions as possible, since we no longer need to support v0.4.

But before I start doing that we should establish a set of benchmarks that are useful and important to users of FixedPointNumbers.

If you have any particular suggestions please add them in comments or open PRs directly to this branch.

cc: @jrevels for his expertise.
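To make the request concrete, here is a minimal sketch of what such a suite might look like, assuming BenchmarkTools.jl is used; the group names and the chosen operations are illustrative, not part of this PR:

```julia
# Hypothetical benchmark suite for FixedPointNumbers (illustrative only).
using BenchmarkTools
using FixedPointNumbers

const SUITE = BenchmarkGroup()
SUITE["conversion"] = BenchmarkGroup()
SUITE["arithmetic"] = BenchmarkGroup()

for T in (N0f8, N0f16, Q0f7, Q0f15)
    x = T(0.25)
    # Conversion to/from floating point is a common hot path (e.g. in images).
    SUITE["conversion"][string(T)] = @benchmarkable Float64($x)
    # Whole-array arithmetic, so a lost vectorization affects the score.
    a = rand(T, 1_000)
    SUITE["arithmetic"][string(T)] = @benchmarkable $a .+ $a
end
```

Running `run(SUITE)` would then produce a `BenchmarkGroup` of trials that can be compared across commits with `judge`.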

@codecov-io commented Jan 30, 2017

Codecov Report

Merging #69 into master will not impact coverage.

@@           Coverage Diff           @@
##           master      #69   +/-   ##
=======================================
  Coverage   82.96%   82.96%           
=======================================
  Files           4        4           
  Lines         182      182           
=======================================
  Hits          151      151           
  Misses         31       31

Last update 7188a96...a5a649f.

@timholy (Member) left a comment:

Great idea. Are you thinking that performance loss will cause a test failure? I've noticed in ImageCore that Travis is not great for performance-testing, though it's partially due to bounds-checking during the running of tests. (That wouldn't affect this package.)


for FT in (Q0f7, Q1f14, Q7f24, N0f8, N2f14, N0f32, N2f30, N0f64, N2f62)
x = FT(0.25)
# Float16 doesn't behave well
Member: I think that's only on Julia 0.5.

Collaborator (Author): Oh yeah... we (I) did change those semantics in v0.6.

# faster and more reliable than re-tuning `suite` every time the file is included
paramspath = Pkg.dir("FixedPointNumbers", "benchmark", "params.jld")
# tune!(suite); JLD.save(paramspath, "suite", params(suite));
loadparams!(suite, JLD.load(paramspath, "suite"), :evals, :samples);
This should probably check to see if the parameters have been generated locally, and if not, generate them.

You don't want to check in params.jld - the whole point of the benchmark parameters tuning process is that BenchmarkTools is trying to estimate the most stable/efficient experimental configuration for your machine.
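One way to implement that suggestion (a sketch under the era's API, with an illustrative one-entry suite; `params.jld` stays untracked):

```julia
# Check-and-generate pattern: reuse locally tuned parameters if present,
# otherwise tune on this machine and save them. The suite contents here
# are illustrative, not from the PR.
using BenchmarkTools, JLD

suite = BenchmarkGroup()
suite["sum"] = @benchmarkable sum($(rand(100)))

paramspath = joinpath(@__DIR__, "params.jld")
if isfile(paramspath)
    # Tuned parameters are machine-specific, so params.jld is not committed.
    loadparams!(suite, JLD.load(paramspath, "suite"), :evals, :samples)
else
    tune!(suite)
    JLD.save(paramspath, "suite", params(suite))
end
```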


Also, I do realize this comes from BenchmarkTools example, so I should probably update that to reflect what I'm actually saying here...

@kimikage (Collaborator)
For future reference, here is the current situation in 2019.

Benchmark results are highly dependent on CPU throttling and caching, so it makes little sense to keep re-running the benchmarks with the same stored inputs via JLD.

In addition, SIMD is an important factor in computing speed on modern CPUs. Benchmarks of a single operation can easily mislead us. Although `code_llvm` and `code_native` are powerful and helpful, they can also mislead us (cf. #138 (review)).

Julia may give up on an optimization due to just a few small things (cf. PR #145). Since the cause of a slowdown is unlikely to appear in the scores themselves, we will need to manage our score book carefully.
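For example, timing a whole-array kernel rather than one scalar operation makes a lost vectorization visible in the score itself. This is a sketch, not code from the PR:

```julia
# Compare an N0f8 array kernel against a Float32 baseline (illustrative).
using BenchmarkTools
using FixedPointNumbers

a = rand(N0f8, 10_000)
b = rand(N0f8, 10_000)

t_fixed = @belapsed $a .* $b                       # may or may not vectorize
t_float = @belapsed Float32.($a) .* Float32.($b)   # floating-point baseline
println("fixed: ", t_fixed, " s, float32 baseline: ", t_float, " s")
```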

@timholy (Member) commented Nov 30, 2019

Agreed with all these reservations. Of course, they seem fixable; it will just take work to write the diversity of tests that encapsulate all the nuances.

@kimikage kimikage marked this pull request as draft July 18, 2020 22:50
@vchuravy (Collaborator, Author)
I won't have time to finish this, and I haven't looked at FixedPointNumbers.jl in a while.

@vchuravy vchuravy closed this Jul 19, 2021
5 participants