Introduce benchmarking API #2437

Merged · 1 commit merged into bytecodealliance:main on Dec 15, 2020
Conversation

@abrown (Contributor) commented Nov 20, 2020

The new crate introduced here, `wasmtime-bench-api`, creates a shared library, e.g. `wasmtime_bench_api.so`, for executing Wasm benchmarks using Wasmtime. It allows us to measure several phases separately by exposing `engine_compile_module`, `engine_instantiate_module`, and `engine_execute_module`, which pass around an opaque pointer to the internally initialized state. This state is initialized and freed by `engine_create` and `engine_free`, respectively. The API also introduces a way of passing in functions to satisfy the `"bench" "start"` and `"bench" "end"` symbols that we expect Wasm benchmarks to import. The API is exposed in a C-compatible way so that we can dynamically load it (carefully) in our benchmark runner.

@abrown (Contributor, Author) commented Nov 20, 2020

I created a draft based on @fitzgen's comment re: a Wasmtime benchmark dylib. I don't really expect this to be the final product but I wanted to make sure that I could use the API we discussed over in sightglass (and what better way to know than to actually build it?). I am not quite sure about:

  • There are a lot of unsafe bits here due to the C FFI, and I would appreciate someone verifying that I didn't miss anything.
  • Having the "bench" "start"/"end" functions use a plain fn() signature is very convenient for the benchmarks, but I will need to be very careful in Sightglass that the measurement state (e.g. the start time) is handled safely; see the sketch just after this list.
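To illustrate the second point: with the Wasmtime API of this era, plain fn() callbacks can be wrapped directly as the benchmark's imports. This is a minimal sketch under that assumption, not the crate's actual code; because fn() carries no state, the measurement state has to live on the runner's side:

```rust
// Minimal sketch, assuming the Wasmtime API circa late 2020; not the crate's code.
use wasmtime::{Func, Instance, Module, Store};

fn instantiate_benchmark(
    store: &Store,
    module: &Module,
    bench_start: extern "C" fn(),
    bench_end: extern "C" fn(),
) -> anyhow::Result<Instance> {
    // fn() imports carry no state, so the start timestamp, counters, etc.
    // must be kept somewhere the callbacks can reach safely (e.g. Sightglass).
    let start = Func::wrap(store, move || bench_start());
    let end = Func::wrap(store, move || bench_end());
    // Assumes the module's imports are exactly [bench.start, bench.end], in order.
    Instance::new(store, module, &[start.into(), end.into()])
}
```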

@abrown force-pushed the bench-api branch 2 times, most recently from 5b1bdbb to 0567e6e on November 20, 2020 19:56
@fitzgen (Member) left a comment

Looks good to me!

We should hold off landing this until the RFC is accepted, but I don't think there is too much to worry about as far as bit rot goes or anything like that.

Thanks @abrown!

Comment on lines +101 to +107
/// Opaque pointer type for hiding the engine state details.
#[repr(C)]
pub struct OpaqueEngineState {
    _private: [u8; 0],
}

I don't think we actually need a new type here. We should be able to pass *mut EngineState out and expect that callers won't be able to access its fields anyway.
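A minimal sketch of that suggestion (hypothetical, with a placeholder standing in for the real fields): box the existing state and hand out the raw pointer; callers outside the crate cannot touch the private fields anyway.

```rust
// Hypothetical sketch of the suggestion, not the actual patch.
pub struct EngineState {
    _placeholder: (), // stand-in for the real fields (engine, store, bytes, ...)
}

#[no_mangle]
pub extern "C" fn engine_create() -> *mut EngineState {
    // (Creation parameters elided in this sketch.)
    Box::into_raw(Box::new(EngineState { _placeholder: () }))
}

#[no_mangle]
pub extern "C" fn engine_free(state: *mut EngineState) {
    if !state.is_null() {
        // Retake ownership so the state is dropped exactly once.
        drop(unsafe { Box::from_raw(state) });
    }
}
```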

struct EngineState<'a> {
    bytes: &'a [u8],
    engine: Engine,
    store: Store,
    // … (excerpt truncated; remaining fields not shown)
}

Worth noting in a comment that if we ever want to measure more than one instantiation per process, we will probably want to create a new store for each instantiation.
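For example, a sketch of that pattern (assuming the Wasmtime API of this era, and a module with no imports for brevity):

```rust
// Sketch: build a fresh Store for each measured instantiation instead of
// reusing one long-lived Store across measurements.
use wasmtime::{Engine, Instance, Module, Store};

fn instantiate_once(engine: &Engine, module: &Module) -> anyhow::Result<Instance> {
    let store = Store::new(engine); // fresh Store per instantiation
    Instance::new(&store, module, &[]) // assumes a module with no imports
}
```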

@jlb6740 (Contributor) commented Dec 1, 2020

The new crate introduced here, `wasmtime-bench-api`, creates a shared library, e.g. `wasmtime_bench_api.so`, for executing Wasm benchmarks using Wasmtime. It allows us to measure several phases separately by exposing `engine_compile_module`, `engine_instantiate_module`, and `engine_execute_module`, which pass around an opaque pointer to the internally initialized state. This state is initialized and freed by `engine_create` and `engine_free`, respectively. The API also introduces a way of passing in functions to satisfy the `"bench" "start"` and `"bench" "end"` symbols that we expect Wasm benchmarks to import. The API is exposed in a C-compatible way so that we can dynamically load it (carefully) in our benchmark runner.

Hi @abrown, to be clear: this adds a crate that, when built, produces a shared library embedding enough of the Wasmtime runtime into a host (Sightglass, for example) to instantiate and run a Wasm file? Doesn't the C API already expose functions to load, instantiate, and execute modules? If so, what are the notable differences here? Also, regarding bench_start and bench_end: if someone has a C program or Rust files and wants to add these markers around a specific function and recompile to Wasm, how is the shared library produced here used in that process?

@abrown (Contributor, Author) commented Dec 1, 2020

this adds a crate that, when built, produces a shared library embedding enough of the Wasmtime runtime into a host (Sightglass, for example) to instantiate and run a Wasm file?

Yes.

Doesn't the C API already expose functions to load, instantiate, and execute modules? If so, what are the notable differences here?

It does, but this is quite a bit simpler. I did create bindings to use the Wasm C API from Rust, but that was considerably more work than this approach (a lot of unsafe details are still to be figured out there). Either way is fine with me.

Also, regarding bench_start and bench_end: if someone has a C program or Rust files and wants to add these markers around a specific function and recompile to Wasm, how is the shared library produced here used in that process?

This shared library is completely decoupled from that: benchmark artifacts can be created independently of how they are run. There is an example of how this works here.
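To make the bench_start/bench_end part concrete: the markers live in the benchmark's own source and are imported from the `bench` module when it is compiled to Wasm. A hypothetical Rust benchmark would look roughly like this (illustrative only):

```rust
// Hypothetical benchmark source, compiled to a wasm32 target; the imports
// are satisfied by the runner (through the bench API) at instantiation time.
#[link(wasm_import_module = "bench")]
extern "C" {
    fn start();
    fn end();
}

fn main() {
    unsafe { start() }; // begin measurement
    let mut sum: u64 = 0;
    for i in 0..1_000_000u64 {
        sum = sum.wrapping_add(i); // workload under measurement
    }
    unsafe { end() }; // end measurement
    assert!(sum > 0); // keep the workload from being optimized away
}
```

The shared library produced by this crate only enters the picture later, when a runner loads it to compile, instantiate, and execute that Wasm file.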

abrown added a commit to abrown/sightglass that referenced this pull request Dec 10, 2020
This change brings in work done in a separate repository to create a new benchmark runner for sightglass. The original code is retained with a `legacy` moniker where appropriate; the new runner uses `next`. The `README.md` now contains a description of the different components present in the repository and links to documentation or implementation for each. In writing this, it clarified for me that a reorganization (in a future PR) is needed.

The new runner employs a different paradigm than the old one:
 - benchmarks are Wasm files, not shared libraries, built using Dockerfiles and committed in-tree (accompanied by their WAT pair) under `benchmarks-next`; they import `bench.start` and `bench.end` functions which are placed around the section of code to measure
 - engines (i.e. Wasm VMs) are shared libraries, like the one introduced in bytecodealliance/wasmtime#2437, allowing the runner to control the compilation, instantiation, and execution of the Wasm benchmark (see the loading sketch just after this commit message)
 - the building and running of the benchmarks is controlled with a CLI tool, see `cargo +nightly run -- --help` and `docs/next.md` for more information
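As a rough illustration of the runner side, loading such an engine shared library might look like this (assuming the libloading crate; the symbol signatures are the same hypothetical ones sketched earlier, not Sightglass's actual code):

```rust
// Rough sketch of dynamically loading the bench API; not the actual runner.
use libloading::{Library, Symbol};
use std::ffi::c_void;

extern "C" fn bench_start() { /* e.g. record a start timestamp in runner-owned state */ }
extern "C" fn bench_end() { /* e.g. record an end timestamp in runner-owned state */ }

unsafe fn measure(lib_path: &str, wasm: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    let lib = Library::new(lib_path)?;

    // Hypothetical signatures; the real exports may take different arguments.
    let create: Symbol<unsafe extern "C" fn(*const u8, usize, extern "C" fn(), extern "C" fn()) -> *mut c_void> =
        lib.get(b"engine_create")?;
    let compile: Symbol<unsafe extern "C" fn(*mut c_void) -> i32> = lib.get(b"engine_compile_module")?;
    let instantiate: Symbol<unsafe extern "C" fn(*mut c_void) -> i32> = lib.get(b"engine_instantiate_module")?;
    let execute: Symbol<unsafe extern "C" fn(*mut c_void) -> i32> = lib.get(b"engine_execute_module")?;
    let free: Symbol<unsafe extern "C" fn(*mut c_void)> = lib.get(b"engine_free")?;

    let state = create(wasm.as_ptr(), wasm.len(), bench_start, bench_end);
    // The runner would wrap its own timers around each phase call.
    compile(state);
    instantiate(state);
    execute(state);
    free(state);
    Ok(())
}
```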
abrown added a commit to bytecodealliance/sightglass that referenced this pull request Dec 11, 2020
This change brings in work done in a separate repository to create a new benchmark runner for sightglass. The original code is retained with a `legacy` moniker where appropriate; the new runner uses `next`. The `README.md` now contains a description of the different components present in the repository and links to documentation or implementation for each. In writing this, it clarified for me that a reorganization (in a future PR) is needed.

The new runner employs a different paradigm than the old one:
 - benchmarks are Wasm files, not shared libraries, built using Dockerfiles and committed in-tree (accompanied by their WAT pair) under `benchmarks-next`; they import `bench.start` and `bench.end` functions which are placed around the section of code to measure
 - engines (i.e. Wasm VMs) are shared libraries, like the one introduced in bytecodealliance/wasmtime#2437, allowing the runner to control the compilation, instantiation, and execution of the Wasm benchmark
 - the building and running of the benchmarks is controlled with a CLI tool, see `cargo +nightly run -- --help` and `docs/next.md` for more information
@fitzgen (Member) left a comment

Talked this over with @abrown and we're going to merge this now so that we can continue our benchmark prototyping experiments and so that it is easier to collaborate on any changes we might need to make to this benchmarking API.

@fitzgen marked this pull request as ready for review December 15, 2020 17:42
@fitzgen merged commit 48b401c into bytecodealliance:main Dec 15, 2020
@abrown deleted the bench-api branch December 15, 2020 17:50