
chore: Reduce contention when using EszipV2 from multiple threads #108

Closed
wants to merge 2 commits

Conversation

andreubotella

The `EszipV2` struct guards the module map behind an `Arc<Mutex>`. This is needed to support streaming sources while returning a result before parsing has finished. However, it means every operation that needs access to the module map has to contend for that same lock.

This contention is inevitable while streaming parsing is going on, but reads also contend with each other. This change avoids the read contention by switching the `Mutex` for a `RwLock`.

However, for pending modules, the `get_module_source` and `get_module_source_map` methods add a waker for the current async context to the module's waker list, and that would require write locking. To avoid this, this change also guards the waker list behind its own `Mutex`, which reduces the scope of that contention by making it specific to a single module.

With these changes, given a fully resolved `EszipV2` that isn't modified, all operations on it are now free of lock contention.
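The sketch below is a rough illustration of the locking layout described above, not the actual eszip internals; the `Eszip`, `ModuleSlot`, and `poll_module_source` names are made up for illustration. It shows the module map behind a `RwLock` and each pending module's waker list behind its own per-module `Mutex`.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};
use std::task::Waker;

// Hypothetical module entry; the real eszip types differ.
struct ModuleSlot {
    source: Option<Arc<[u8]>>,
    // Wakers for tasks polling a still-pending module live behind a
    // per-module Mutex, so registering a waker doesn't need to
    // write-lock the whole module map.
    wakers: Mutex<Vec<Waker>>,
}

struct Eszip {
    // RwLock instead of Mutex: concurrent readers of already-parsed
    // modules no longer contend with each other, only with the writer
    // that is still streaming in new modules.
    modules: Arc<RwLock<HashMap<String, ModuleSlot>>>,
}

impl Eszip {
    fn poll_module_source(&self, specifier: &str, waker: &Waker) -> Option<Arc<[u8]>> {
        // Readers only take the shared (read) side of the RwLock.
        let map = self.modules.read().unwrap();
        let slot = map.get(specifier)?;
        match &slot.source {
            Some(source) => Some(source.clone()),
            None => {
                // Pending module: registering the waker only locks this
                // one module's waker list, not the whole map.
                slot.wakers.lock().unwrap().push(waker.clone());
                None
            }
        }
    }
}
```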

@andreubotella (Author)

PTAL @lucacasonato @dsherret

@lucacasonato (Member) left a comment

I don't know if this is actually a net positive - do you have benchmarks?

@andreubotella (Author) commented Feb 9, 2023

I put together a benchmark based on denoland/deno#17663, which is basically a stress test of loading a realistic module graph (the Fresh server). It's at https://github.com/andreubotella/deno-compile-worker-benchmark. Given the results below, let's close this PR, I guess.

With `Mutex`:

cpu: AMD Ryzen 7 3700X 8-Core Processor
runtime: deno 1.30.3 (x86_64-unknown-linux-gnu)

file:///home/abotella/Projects/forks/denoland/deno-compile-worker-benchmark/bench.ts
benchmark                 time (avg)             (min … max)       p75       p99      p995
------------------------------------------------------------ -----------------------------
parallel stress test  616.04 ms/iter  (609.5 ms … 625.98 ms) 618.57 ms 625.98 ms 625.98 ms
single test            99.64 ms/iter   (97.5 ms … 102.98 ms) 100.12 ms 102.98 ms 102.98 ms

With `RwLock`:

benchmark                 time (avg)             (min … max)       p75       p99      p995
------------------------------------------------------------ -----------------------------
parallel stress test   619.7 ms/iter (611.36 ms … 625.71 ms) 624.04 ms 625.71 ms 625.71 ms
single test           101.23 ms/iter  (98.92 ms … 103.71 ms) 101.82 ms 103.71 ms 103.71 ms
