
Create benchmarking suite for optimised models #128

Closed · lewtun opened this issue Apr 1, 2022 · 1 comment · Fixed by #194
lewtun commented Apr 1, 2022

Now that we have tight Hub integration coming via #113, it could be useful to implement a simple benchmarking suite that allows users to:

  • Select a dataset on the Hub
  • Select a metric on the Hub
  • Select N models (could already be optimised models)
  • Optimise the models (if needed)
  • Report a table of results comparing the latency gains against the impact on the model metric (a rough sketch of this flow is given below)
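A minimal sketch of what this flow could look like, assuming the `datasets`, `evaluate`, and `transformers` libraries; the task, column names, and sample counts below are illustrative placeholders, not a proposed API:

```python
import time

from datasets import load_dataset
from evaluate import load as load_metric
from transformers import pipeline


def benchmark(model_ids, dataset_id, metric_id, text_column="text",
              label_column="label", split="validation", n_samples=100):
    """Compare several (possibly optimised) models on latency and a quality metric."""
    dataset = load_dataset(dataset_id, split=f"{split}[:{n_samples}]")
    metric = load_metric(metric_id)
    rows = []
    for model_id in model_ids:
        pipe = pipeline("text-classification", model=model_id)
        latencies, predictions = [], []
        for example in dataset:
            start = time.perf_counter()
            output = pipe(example[text_column])[0]
            latencies.append(time.perf_counter() - start)
            # Map the predicted label string back to its integer id for the metric.
            predictions.append(pipe.model.config.label2id[output["label"]])
        score = metric.compute(predictions=predictions,
                               references=dataset[label_column])
        rows.append({"model": model_id,
                     "avg_latency_ms": 1000 * sum(latencies) / len(latencies),
                     **score})
    return rows
```

For example, `benchmark(["distilbert-base-uncased-finetuned-sst-2-english"], "sst2", "accuracy", text_column="sentence")` would yield one row per model; an optimisation step could be slotted in before each pipeline is built.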

As a first step, we might simply benchmark latency on dummy inputs at various sequence lengths.
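For that first step, a rough sketch using PyTorch dummy inputs might look like the following; the checkpoint, sequence lengths, and repetition counts are arbitrary choices for illustration:

```python
import time

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "distilbert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

for seq_len in (8, 32, 128, 512):
    # Pad a single token out to the target length to build a dummy batch.
    dummy = tokenizer("hello", padding="max_length", max_length=seq_len,
                      truncation=True, return_tensors="pt")
    with torch.no_grad():
        for _ in range(3):  # warmup runs to exclude one-off costs
            model(**dummy)
        latencies = []
        for _ in range(10):
            start = time.perf_counter()
            model(**dummy)
            latencies.append(1000 * (time.perf_counter() - start))
    mean = sum(latencies) / len(latencies)
    print(f"seq_len={seq_len:4d}  mean latency={mean:6.1f} ms")
```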
