
Add benchmarks #787

Open
Herschel opened this issue Jul 1, 2020 · 3 comments
Labels
A-build Area: Build scripts & CI

Comments

@Herschel
Member

Herschel commented Jul 1, 2020

We should decide on a benchmarking method so that we can measure changes to SWF parsing and the AVM (for example, when we improve our string type or cache string hashes) and watch for regressions.

Options:

  • cargo bench, but this requires nightly and is somewhat limited.
  • bencher is a stable port of the above.
  • criterion runs on stable, but is somewhat heavy (a minimal sketch follows below).
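
For reference, a criterion benchmark for SWF parsing could look roughly like the sketch below. This is only a minimal sketch under assumptions: `parse_swf_bytes` and the `tests/swfs/example.swf` fixture are hypothetical stand-ins for whatever entry point and sample file we actually benchmark.

```rust
// benches/swf_parse.rs -- hypothetical benchmark file. Needs a dev-dependency on
// `criterion` and `[[bench]] name = "swf_parse", harness = false` in Cargo.toml.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Placeholder for the real parsing entry point (e.g. something in the `swf` crate).
fn parse_swf_bytes(data: &[u8]) {
    let _ = data;
}

fn swf_parsing(c: &mut Criterion) {
    // Hypothetical sample SWF checked into the repository.
    let data = std::fs::read("tests/swfs/example.swf").expect("missing fixture");
    c.bench_function("parse example.swf", |b| {
        b.iter(|| parse_swf_bytes(black_box(&data)))
    });
}

criterion_group!(benches, swf_parsing);
criterion_main!(benches);
```

This runs on stable via `cargo bench`, and criterion keeps per-benchmark history, which would already help with spotting regressions locally.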

Questions:

  • Can we use any of the above easily on the wasm target? (Doesn't seem like it; a rough manual-timing fallback is sketched below.)
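
None of the crates above support wasm out of the box, so as a rough fallback (an assumption on my part, not an established setup), timing could be done by hand inside a `wasm-bindgen-test`, using the browser's `performance.now()`. Here `parse_swf_bytes`, the fixture path, and the iteration count are all placeholders:

```rust
// tests/wasm_timing.rs -- rough manual-timing fallback for the wasm target,
// run with e.g. `wasm-pack test --headless --chrome`. Requires the `Window`,
// `Performance` and `console` features of `web-sys`. Not a real benchmark
// harness, just repeated timing via the browser's high-resolution clock.
use wasm_bindgen_test::wasm_bindgen_test;

wasm_bindgen_test::wasm_bindgen_test_configure!(run_in_browser);

// Placeholder for the code under test.
fn parse_swf_bytes(data: &[u8]) {
    let _ = data;
}

#[wasm_bindgen_test]
fn time_swf_parsing() {
    let data = include_bytes!("swfs/example.swf"); // hypothetical fixture
    let perf = web_sys::window().unwrap().performance().unwrap();

    let start = perf.now();
    for _ in 0..100 {
        parse_swf_bytes(data);
    }
    let elapsed = perf.now() - start;

    // Log the result; a real setup would collect this into a report instead.
    web_sys::console::log_1(&format!("100 iterations took {elapsed} ms").into());
}
```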
Herschel added the actionscript and A-build (Area: Build scripts & CI) labels on Jul 1, 2020
@iwannabethedev
Contributor

Adding to this issue: as Ruffle matures, I think this issue becomes even more valuable. One feature that I think could be very helpful is automatic performance profiling/testing, with graphs automatically generated and published online, with commits on the x-axis and different performance metrics on the y-axis. That way, performance regressions would be easier to spot, and likewise improvements, and a better overall understanding of performance would become feasible. That said, even manual profiling can be very useful.

However, as already described in the issue, there can be different challenges (credit to @Dinnerbone for helping investigate this), including:

  • Consistent, non-varying performance: If performance is to be tracked across commits, consistent performance (both within a profiling run and between profiling runs) is likely necessary for several of the approaches, so that the generated data is useful rather than misleading. This can be a problem on, for instance, GitHub-hosted runners, whose performance can vary both within and between runs (there may or may not be ways to mitigate this sufficiently for our needs). Dedicated hardware might help, but that requires resources, money, infrastructure and time to set up and maintain. If there are approaches that mitigate or side-step the variability problem, setting this up could become substantially easier and more feasible (one possibility is sketched after this list).
  • Infrastructure: Setting up automatic profiling will likely require at least some investment in, and maintenance of, infrastructure, though approaches that avoid or mitigate this (if they exist) would likely be very helpful.
  • Platforms that can be supported for profiling: Can a given approach or technology profile both web and desktop? For desktop, which OSes? What about the different renderer backends like WGPU, WebGL and Canvas? That said, supporting all of these platforms and technologies may not be necessary. Prioritizing a few of them might already give a lot of value, even for the platforms and technologies that were not targeted for profiling, since an optimization for one platform might also work well on another, and some optimizations help performance in general.
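
One possible way to side-step noisy shared runners (a suggestion of mine, not something already planned here) is to measure instruction counts instead of wall-clock time, for example with the `iai` crate, which runs each benchmark under Cachegrind. Instruction counts are close to deterministic, so they stay comparable across commits even when the underlying hardware varies. A minimal sketch, again with `parse_swf_bytes` and the fixture path as placeholders:

```rust
// benches/iai_swf.rs -- instruction-counting benchmark sketch using the `iai`
// crate. Requires Valgrind/Cachegrind on the machine and
// `[[bench]] name = "iai_swf", harness = false` in Cargo.toml.
use iai::black_box;

// Placeholder for the real parsing entry point.
fn parse_swf_bytes(data: &[u8]) {
    let _ = data;
}

fn bench_parse_example_swf() {
    // Instruction counts barely vary between runs, which is what makes this
    // usable on shared CI runners; the fixture path is hypothetical.
    let data = std::fs::read("tests/swfs/example.swf").expect("missing fixture");
    parse_swf_bytes(black_box(&data));
}

iai::main!(bench_parse_example_swf);
```

The resulting counts could then be committed alongside the repository or pushed to a small dashboard to get the commit-by-commit graphs described above, without dedicated hardware.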

Extra tools and technologies that may be relevant:

Additional search keywords for this issue: Performance, profiling, profiler, speed, graphics.

@torokati44
Member

Referencing: #9045, #6071

@iwannabethedev
Contributor

Also: #3432.
