Benchmarking #71
Conversation
* Migrating old benchmark infra to ASV
* Everything is working now, but still needs some cleanup
* Added new benchmarks for simple circuits with just u3 and cnots but different number of qubits
* Removing old code
```python
        pass

    def time_ideal_quantum_volume(self, qobj):
```
I don't see where the qobj gets passed into these functions when they are called?
Yep, ASV calls it automatically. It uses `self.params` as a way to specify the parameters of the functions it is going to benchmark, so in this case, you'll see in the constructor that there's a:

```python
self.params = (self.qv_circuits)
```

So the qv_circuits are a list of qobjs that are going to be passed as function parameters.
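To make the mechanism concrete, here is a minimal, self-contained sketch of how ASV consumes `self.params` (class and payload names are illustrative, and the runner loop below only emulates what ASV does internally):

```python
class ExampleTimeSuite:
    """Illustrative ASV-style suite: ASV calls each time_* method once per params entry."""

    def __init__(self):
        # Hypothetical stand-ins for the pre-built qobjs
        self.qv_circuits = ["qobj_2q", "qobj_4q", "qobj_8q"]
        # ASV inspects self.params to generate one benchmark run per value
        self.params = (self.qv_circuits)
        self.param_names = ["qobj"]

    def time_ideal_quantum_volume(self, qobj):
        # The real suite would run the simulator on the qobj here
        return "ran " + qobj


# Emulate what ASV's runner does: call the method once per parameter value
suite = ExampleTimeSuite()
results = [suite.time_ideal_quantum_volume(q) for q in suite.params]
print(results)
```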
Could you add a comment documenting that in the TestSuite class/base class docstring?
```python
    mixed_unitary_noise_model, reset_noise_model, \
    kraus_noise_model


class SimpleU3TimeSuite:
```
Could these `*TimeSuite` classes all inherit from a base class, since most of the code is the same other than the `__init__` method?
After talking to you about this, I'm gonna use `self.params` to add the noise model as a new parameter, so we only need one function.
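A sketch of what that could look like (names are hypothetical): with several parameter axes, ASV benchmarks the cartesian product of the lists in `self.params`, so one `time_*` method can cover every circuit/noise-model pair.

```python
import itertools


class SimpleTimeSuite:
    """Sketch of one benchmark parameterized over circuits *and* noise models."""

    def __init__(self):
        # Hypothetical stand-ins for the real circuits and noise models
        self.circuits = ["u3_circuit", "cx_circuit"]
        self.noise_models = [None, "mixed_unitary", "reset", "kraus"]
        # With several parameter axes, ASV runs the cartesian product
        self.params = (self.circuits, self.noise_models)
        self.param_names = ["circuit", "noise_model"]

    def time_simple_circuit(self, circuit, noise_model):
        # The real suite would attach the noise model and run the simulator
        return (circuit, noise_model)


suite = SimpleTimeSuite()
# Emulate ASV's runner: one call per (circuit, noise_model) combination
combos = [suite.time_simple_circuit(c, n)
          for c, n in itertools.product(*suite.params)]
print(len(combos))
```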
test/benchmark/simple_benchmarks.py
Outdated
```python
        )
        self.params = (self.circuits)

    def time_ideal_simple_u3(self, qobj):
```
Don't think `_u3` is supposed to be here.
Where would you put it?
This one is testing CX, not u3.
Oh! You are right... copy/pasting side effects :)
""" | ||
qr = QuantumRegister(num_qubits) | ||
circuit = QuantumCircuit(qr) | ||
for i in range(num_qubits): |
It would be good to time a single u3 gate vs qubit number, and average over which qubit in the circuit it is applied to. I.e.: estimate the time `t[j]` of applying `circuit.u3(qr[j])`, then compute `t_ave = mean(t)` and plot `t_ave` vs the total number of qubits in the circuit.

This should also be done for `cx` (averaged over all pairs of qubits), `reset`, and `measure` (averaged over single qubits), and later we could add other specialized gates that have an optimized implementation.

This would allow estimating the single-shot run-time: if we know `t_ave_u3`, `t_ave_cx`, `t_ave_reset`, and `t_ave_measure` for n qubits, we can just multiply them by the number of instructions of each type in the circuit.
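The measurement loop being suggested could be sketched like this (hedged: `apply_single_u3` is a hypothetical stand-in for building and simulating a one-gate circuit; the real version would call the simulator):

```python
import timeit
from statistics import mean


def apply_single_u3(qubit_index, num_qubits):
    # Hypothetical stand-in: the real benchmark would build a circuit with a
    # single u3 on qubit `qubit_index` and run it on the simulator
    return sum(range(num_qubits))


def average_gate_time(num_qubits, repeats=50):
    """Time the gate on each qubit position and average over positions."""
    per_qubit = []
    for j in range(num_qubits):
        total = timeit.timeit(lambda: apply_single_u3(j, num_qubits),
                              number=repeats)
        per_qubit.append(total / repeats)  # t[j]
    return mean(per_qubit)  # t_ave for this qubit count


# Plot-ready data: t_ave vs total number of qubits in the circuit
curve = {n: average_gate_time(n) for n in (2, 4, 8)}
print(sorted(curve))
```

The same loop, with `itertools.combinations` over qubit pairs, would cover the `cx` case.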
Ok! Let's move this to a follow-up PR :)
* Simplifying to just one function to benchmark, by moving the noise models as parameters
* Minor style changes I've found
* Better docstring documentation
* Improving ASV virtual environments generation
src/framework/types.hpp
Outdated
```diff
@@ -98,7 +98,7 @@ std::ostream &operator<<(std::ostream &out, const std::array<T, N> &v) {

 // ostream overload for maps
 template <typename T1, typename T2, typename T3>
-std::ostream &operator<<(std::ostream &out, const std::map<T1, T2, T3> &m) {
+std::ostream &operator<<(std::oscmatrixeam &out, const std::map<T1, T2, T3> &m) {
```
Some typo going on here
XD
Ok, we are creating a follow-up PR with the changes proposed by @chriseclectic about the new benchmarks, and I'm gonna merge this as everything is green now :)

Oops, I don't have review approval yet. @chriseclectic did I leave something out?
* Adding ASV as the framework for benchmarking
* Migrating old benchmark infra to ASV
* Adding tools for benchmarks
* Adding params to the benchmarks
* Added new benchmarks for simple circuits with just u3 and cnots but different number of qubits
* Simplifying to just one function to benchmark, by moving the noise models as parameters
* Minor style changes I've found
* Simplifying benchmark code by moving the noise models to params
* Better docstring documentation
* Improving ASV virtual environments generation
Summary
Introduce benchmarking system to detect possible regressions.
Details and comments
Performance is one of the goals of our simulators, and the only way to ensure they run at their highest speed is by tracking how they perform over time; if any change causes a degradation in performance, we need to know and act accordingly.
ASV (Airspeed Velocity) is a Python framework for creating benchmarks, and it is the one being used here to track our performance history and warn us if we introduce a regression.
For now, all the benchmarks will run the QasmSimulator with three different noise models:
And the benchmarks are:
For the new benchmarks introduced by this PR, I haven't set any specific configuration for our threading strategy, so basically I'm delegating the concurrency to the heuristics we already have in place in the simulators.
This is the command sequence to run the benchmarks and show the results in `html` format.

As a first attempt, we will only monitor `time` measurements (how long a circuit takes to run), and only from the point the simulator starts executing the circuit. Initial setup and circuit compilation through Terra won't be measured (as this will be measured in the `Qiskit` metapackage).

Important! I'm leaving the setting up of the workflow for publishing results to another Pull Request; this is all about having the scaffolding and a minor set of benchmarks in place.
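For reference, the usual ASV command sequence for running benchmarks and viewing the HTML report looks roughly like this (a sketch, assuming `asv` is installed via `pip install asv` and an `asv.conf.json` exists at the repository root):

```shell
asv run        # run the benchmark suite against the current commit(s)
asv publish    # generate the static HTML report from collected results
asv preview    # serve the generated report locally in a browser
```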
TODO