stylus_benchmark #2827
Conversation
This reverts commit 4fd9d10.
let exec = &mut WasmEnv::default();

let module = exec_program(
Is this code benchmarking only the JIT execution? I'm worried about the differences between the JIT and the interpreter, so it would be nice to benchmark non-JIT execution as well.
That is a good point.
By interpreter you mean execution through the prover's binary?
@tsahee, any takes on that?
In the prover, the bottleneck is not how long execution takes but how many steps it takes, because that determines how long hashing for BOLD will take.
In stylus, the constraint is very different.
I intend to add more scenarios to be benchmarked in a future PR, together with a more detailed look at the results. But here is the output of the benchmarks implemented in this PR, run on an Apple M3 Pro:
LGTM. I think measuring the interpreter (prover) performance is important as well, but you can leave this for a future PR.
    exec_program(exec, module, calldata, config, evm_data, gas)
}

pub fn exec_program(
exec_program vs start_program is very confusing.
Maybe launch_program_thread?
let args = Args::parse();

match args.scenario {
    Some(scenario) => handle_scenario(scenario, args.output_wat_dir_path),
should have some option (either the general --help or --scenario help) that lists all scenarios
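A minimal sketch of one way to get that listing, assuming the binary uses clap's derive API (as the Args::parse() call above suggests); the Scenario variants shown here are hypothetical, not the PR's actual set:

```rust
use clap::{Parser, ValueEnum};

// Hypothetical scenario set; deriving ValueEnum makes clap list every
// variant as a possible value in the generated --help text.
#[derive(Clone, Debug, ValueEnum)]
enum Scenario {
    AddI32,
    XorI32,
}

#[derive(Parser)]
struct Args {
    /// Scenario to benchmark; --help prints all possible values.
    #[arg(long, value_enum)]
    scenario: Option<Scenario>,

    /// Directory where generated .wat programs are written.
    #[arg(long)]
    output_wat_dir_path: Option<std::path::PathBuf>,
}

fn main() {
    let args = Args::parse();
    match args.scenario {
        Some(scenario) => println!("would benchmark {scenario:?}"),
        None => println!("no scenario given; see --help for the full list"),
    }
}
```

With ValueEnum the possible values for --scenario show up automatically in --help, so no hand-maintained list is needed.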
#[derive(StructOpt)]
#[structopt(name = "jit-prover")]
pub struct Opts {
does that really need to be in lib.rs? I don't see it used elsewhere?
Resolves NIT-2757
This PR adds the stylus_benchmark binary.
It will mainly be used to fine-tune the ink prices that are charged today.
It deterministically creates .wat programs with lots of instructions to be benchmarked.
If this PR is approved, generators for more .wat programs, benchmarking other instructions, will be added in a future PR.
This PR introduces two new WASM instructions to jit: start_benchmark and end_benchmark.
Code blocks between start_benchmark and end_benchmark instructions will be benchmarked.
stylus_benchmark uses jit as a library.
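To illustrate how these pieces fit together, here is a minimal sketch, not the PR's actual generator, of deterministically building a .wat program that wraps a repeated instruction between the two new markers; it assumes start_benchmark and end_benchmark are visible to the guest as imports, and the generate_wat helper is hypothetical:

```rust
// Sketch only: module layout and helper name are illustrative assumptions,
// not the PR's actual code.
fn generate_wat(instruction: &str, count: usize) -> String {
    let mut wat = String::new();
    wat.push_str("(module\n");
    wat.push_str("  (import \"env\" \"start_benchmark\" (func $start_benchmark))\n");
    wat.push_str("  (import \"env\" \"end_benchmark\" (func $end_benchmark))\n");
    wat.push_str("  (func (export \"main\")\n");
    wat.push_str("    call $start_benchmark\n");
    for _ in 0..count {
        // Deterministically repeat the instruction under benchmark.
        wat.push_str("    i32.const 1\n");
        wat.push_str("    i32.const 2\n");
        wat.push_str(&format!("    {instruction}\n"));
        wat.push_str("    drop\n");
    }
    wat.push_str("    call $end_benchmark\n");
    wat.push_str("  )\n");
    wat.push_str(")\n");
    wat
}

fn main() {
    // Example: a program that runs 10_000 i32.add instructions between the markers.
    println!("{}", generate_wat("i32.add", 10_000));
}
```

Everything the guest executes between the two calls is the code block that gets benchmarked.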