Introduce benchmarking API #2437
Conversation
I created a draft based on @fitzgen's comment re: a Wasmtime benchmark dylib. I don't really expect this to be the final product, but I wanted to make sure that I could use the API we discussed over in sightglass (and what better way to know than to actually build it?). I am not quite sure about:
Looks good to me!
We should hold off landing this until the RFC is accepted, but I don't think there is too much to worry about as far as bit rot goes or anything like that.
Thanks @abrown!
```rust
/// Opaque pointer type for hiding the engine state details.
#[repr(C)]
pub struct OpaqueEngineState {
    _private: [u8; 0],
}
```
I don't think we need to actually have a new type here. We should be able to pass `*mut EngineState` out and expect that callers won't know how to access its fields or anything.
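For illustration, a minimal sketch of that approach, using the `engine_create`/`engine_free` pair this PR exposes. The field set and signatures are simplified for the sketch (the PR's `EngineState` also borrows the benchmark's Wasm bytes), and it assumes the Wasmtime API of this era, where `Store::new` takes only an `&Engine`:

```rust
use wasmtime::{Engine, Store};

// Simplified for the sketch; not the PR's exact struct.
pub struct EngineState {
    engine: Engine,
    store: Store,
}

#[no_mangle]
pub extern "C" fn engine_create() -> *mut EngineState {
    let engine = Engine::default();
    let store = Store::new(&engine);
    // Box the state and leak the raw pointer across the FFI boundary;
    // callers only ever see an opaque *mut EngineState.
    Box::into_raw(Box::new(EngineState { engine, store }))
}

#[no_mangle]
pub unsafe extern "C" fn engine_free(state: *mut EngineState) {
    // Reconstitute the Box so Rust drops the state exactly once.
    drop(Box::from_raw(state));
}
```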
```rust
struct EngineState<'a> {
    bytes: &'a [u8],
    engine: Engine,
    store: Store,
    // ...
}
```
Worth noting in a comment that if we ever want to measure more than one instantiation per process, we will probably want to create a new store for each instantiation.
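A minimal sketch of what that note suggests, again assuming the Wasmtime API of this era (`Store::new(&engine)`, `Instance::new(&store, &module, &imports)`); the helper name is hypothetical:

```rust
use anyhow::Result;
use wasmtime::{Engine, Instance, Module, Store};

// Hypothetical helper: instantiate one compiled module several times,
// giving each instantiation its own fresh Store so that no state leaks
// from one measured instantiation into the next.
fn instantiate_repeatedly(engine: &Engine, module: &Module, times: usize) -> Result<()> {
    for _ in 0..times {
        let store = Store::new(engine); // new store per instantiation
        let _instance = Instance::new(&store, module, &[])?;
        // ... record the timing for this instantiation here ...
    }
    Ok(())
}
```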
Hi @abrown, to be clear: this adds a crate that, when built, produces a shared library embedding enough of the Wasmtime runtime for something (sightglass, for example) to instantiate and run a Wasm file? Doesn't the C API already expose APIs to load, instantiate, and execute modules? If so, what are the notable differences with what's here? Also, regarding `bench_start` and `bench_end`: if someone has a C program or Rust files and wants to add these markers around a specific function and recompile to Wasm, how is the shared library produced here used in that process?
Yes.
It does, but this is quite a bit simpler. I did create bindings to use the Wasm C API from Rust, but that was quite a bit more work than this approach (a lot of unsafe things are yet to be figured out there). Either way is fine with me.
This shared library is decoupled from that completely. That way benchmark artifacts can be created apart from how they are run. There is an example of how this works here.
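To make the marker workflow concrete: the benchmark's own source imports the `bench` module's `start` and `end` functions and calls them around the region to measure; the shared library built here is only involved later, when the runner instantiates the resulting Wasm file and satisfies those imports. A hypothetical Rust benchmark compiled to a wasm32 target might look like this (the workload function is illustrative):

```rust
// Hypothetical benchmark source. When compiled to Wasm, these become the
// `bench.start` and `bench.end` imports that the runner satisfies at
// instantiation time.
#[link(wasm_import_module = "bench")]
extern "C" {
    fn start();
    fn end();
}

fn main() {
    // ... setup work that should not be measured ...
    unsafe { start() }; // begin measurement
    run_workload();
    unsafe { end() }; // end measurement
    // ... teardown that should not be measured ...
}

fn run_workload() {
    // ... the code under measurement ...
}
```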
This change brings in work done in a separate repository to create a new benchmark runner for sightglass. The original code is retained with a `legacy` moniker where appropriate; the new runner uses `next`. The `README.md` now contains a description of the different components present in the repository and links to documentation or implementation for each. Writing it clarified for me that a reorganization (in a future PR) is needed. The new runner employs a different paradigm than the old one:

- benchmarks are Wasm files, not shared libraries, built using Dockerfiles and committed in-tree (accompanied by their WAT pair) under `benchmarks-next`; they import `bench.start` and `bench.end` functions, which are placed around the section of code to measure
- engines (i.e. Wasm VMs) are shared libraries, like the one introduced in bytecodealliance/wasmtime#2437, allowing the runner to control the compilation, instantiation, and execution of the Wasm benchmark
- the building and running of the benchmarks is controlled with a CLI tool; see `cargo +nightly run -- --help` and `docs/next.md` for more information
The new crate introduced here, `wasmtime-bench-api`, creates a shared library, e.g. `wasmtime_bench_api.so`, for executing Wasm benchmarks using Wasmtime. It allows us to measure several phases separately by exposing `engine_compile_module`, `engine_instantiate_module`, and `engine_execute_module`, which pass around an opaque pointer to the internally initialized state. This state is initialized and freed by `engine_create` and `engine_free`, respectively. The API also introduces a way of passing in functions to satisfy the `"bench" "start"` and `"bench" "end"` symbols that we expect Wasm benchmarks to import. The API is exposed in a C-compatible way so that we can dynamically load it (carefully) in our benchmark runner.
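To make "dynamically load it (carefully)" concrete, here is a minimal runner-side sketch using the `libloading` crate. The library path and every signature below are assumptions for illustration (including how the Wasm bytes are passed and the absence of error codes and bench callbacks); the PR defines the real ones:

```rust
use libloading::{Library, Symbol};
use std::ffi::c_void;

// Sketch only: argument lists, error handling, and the mechanism for
// passing the bench-start/bench-end callbacks are assumptions, not the
// signatures this PR actually defines.
unsafe fn run_all_phases(wasm: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    let lib = Library::new("./libwasmtime_bench_api.so")?;
    let create: Symbol<unsafe extern "C" fn() -> *mut c_void> =
        lib.get(b"engine_create")?;
    let compile: Symbol<unsafe extern "C" fn(*mut c_void, *const u8, usize)> =
        lib.get(b"engine_compile_module")?;
    let instantiate: Symbol<unsafe extern "C" fn(*mut c_void)> =
        lib.get(b"engine_instantiate_module")?;
    let execute: Symbol<unsafe extern "C" fn(*mut c_void)> =
        lib.get(b"engine_execute_module")?;
    let free: Symbol<unsafe extern "C" fn(*mut c_void)> =
        lib.get(b"engine_free")?;

    let state = create();
    // The runner can time each phase independently around these calls.
    compile(state, wasm.as_ptr(), wasm.len());
    instantiate(state);
    execute(state);
    free(state);
    Ok(())
}
```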
Talked this over with @abrown and we're going to merge this now so that we can continue our benchmark prototyping experiments, and so that it is easier to collaborate on any changes we might need to make to this benchmarking API.