Are we park yet? #13

Draft
wants to merge 2 commits into base: main
4 changes: 3 additions & 1 deletion .github/workflows/ci.yml
@@ -43,7 +43,7 @@ jobs:
--target ${{ env.NO_STD_TARGET }}
--no-dev-deps
--feature-powerset
--skip yield,thread_local,parking

msrv:
name: MSRV
@@ -108,6 +108,8 @@
run: cargo run --example thread_local --features thread_local
- name: Run lock_api example
run: cargo run --example lock_api --features lock_api,barging
- name: Run parking with thread_local example
run: cargo run --example parking --features parking,thread_local

linter:
name: Linter
11 changes: 10 additions & 1 deletion Cargo.toml
@@ -17,13 +17,18 @@
categories = ["algorithms", "concurrency", "no-std", "no-std::no-alloc"]
keywords = ["mutex", "no_std", "spinlock", "synchronization"]

[features]
# NOTE: Features `yield`, `thread_local` and `parking` require std.
yield = []
thread_local = []
barging = []
# NOTE: The `dep:` syntax requires Rust 1.60.
parking = ["dep:atomic-wait"]
lock_api = ["dep:lock_api"]

[dependencies.atomic-wait]
version = "1"
optional = true

[dependencies.lock_api]
version = "0.4"
default-features = false
@@ -44,6 +49,10 @@
check-cfg = ["cfg(loom)", "cfg(tarpaulin)", "cfg(tarpaulin_include)"]
name = "barging"
required-features = ["barging"]

[[example]]
name = "parking"
required-features = ["parking", "thread_local"]

[[example]]
name = "thread_local"
required-features = ["thread_local"]
2 changes: 1 addition & 1 deletion Makefile.toml
@@ -23,7 +23,7 @@ args = [
"--feature-powerset",
"--no-dev-deps",
"--skip",
"yield,thread_local,parking",
]
dependencies = ["install-no-std-target"]

88 changes: 72 additions & 16 deletions README.md
@@ -9,7 +9,8 @@
![No_std][no_std-badge]

MCS lock is a List-Based Queuing Lock that avoids network contention by having
threads spin and/or park on local memory locations. The main properties of this
mechanism are:

- guarantees FIFO ordering of lock acquisitions;
- spins on locally-accessible flag variables only;
@@ -24,9 +25,10 @@
paper. And a simpler correctness proof of the MCS lock was proposed by
## Spinlock use cases

It is noteworthy to mention that [spinlocks are usually not what you want]. The
majority of use cases are well covered by OS-based mutexes such as
[`std::sync::Mutex`] and [`parking_lot::Mutex`], or by this crate's own
[`parking`] mutexes. These implementations will notify the system that the
waiting thread should be parked, freeing the processor to work on something else.

Spinlocks are only efficient in the few circumstances where the overhead
of context switching or process rescheduling is greater than busy waiting
@@ -160,6 +162,42 @@
}
```

## Locking with thread-parking MCS locks

This crate also supports MCS lock implementations that put blocked threads to
sleep. Both `no_std` flavors, `raw` and `barging`, have matching `Mutex` types
under the [`parking`] module, with corresponding paths and public APIs, that
are capable of parking threads. These implementations are not `no_std`
compatible. See the [`parking`] module for more information.

```rust
use std::sync::Arc;
use std::thread;

// Requires `parking` feature.
// Spins for a while then parks during contention.
use mcslock::parking::raw::{spins::Mutex, MutexNode};

// Requires `parking` and `thread_local` features.
mcslock::thread_local_parking_node!(static NODE);

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
        // Local node handles are provided by reference.
        // The critical section must be defined as a closure.
        c_mutex.lock_with_local_then(&NODE, |data| *data = 10);
    })
    .join().expect("thread::spawn failed");

    // A node may also be transparently allocated on the stack.
    // The critical section must be defined as a closure.
    assert_eq!(mutex.try_lock_then(|data| *data.unwrap()), 10);
}
```
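The spin-then-park policy that `spins::Mutex` applies above can be illustrated with plain `std` primitives. This is a hypothetical one-shot gate, not this crate's API; the `Gate` type and its methods are invented for the sketch:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Condvar, Mutex};

// Waiters spin for a bounded number of iterations (cheap when the wait is
// short), then fall back to blocking on a Condvar (cheap when it is long).
struct Gate {
    open: AtomicBool,
    lock: Mutex<()>,
    cvar: Condvar,
}

impl Gate {
    fn new() -> Self {
        Self {
            open: AtomicBool::new(false),
            lock: Mutex::new(()),
            cvar: Condvar::new(),
        }
    }

    fn wait(&self) {
        // Fast path: spin for a while before involving the OS.
        for _ in 0..100 {
            if self.open.load(Ordering::Acquire) {
                return;
            }
            std::hint::spin_loop();
        }
        // Slow path: park until notified, rechecking the flag on wakeup.
        let mut guard = self.lock.lock().unwrap();
        while !self.open.load(Ordering::Acquire) {
            guard = self.cvar.wait(guard).unwrap();
        }
    }

    fn open(&self) {
        self.open.store(true, Ordering::Release);
        // Take the mutex so the store cannot slip between a waiter's final
        // check and its wait(); then wake everyone.
        let _guard = self.lock.lock().unwrap();
        self.cvar.notify_all();
    }
}

fn demo() -> i32 {
    let gate = Arc::new(Gate::new());
    let g = Arc::clone(&gate);
    let t = std::thread::spawn(move || {
        g.wait();
        42
    });
    gate.open();
    t.join().unwrap()
}
```

A real lock repeats this spin-then-park cycle on every contended acquisition rather than once, but the cost trade-off is the same.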

## Features

This crate does not provide any default features. Features that can be enabled
@@ -178,26 +216,36 @@
just simply busy-waits. This feature is not `no_std` compatible.

### thread_local

The `thread_local` feature enables [`raw::Mutex`] and [`parking::raw::Mutex`]
locking APIs that operate on queue nodes stored in thread-local storage. These
locking APIs require a static reference to [`raw::LocalMutexNode`] and
[`parking::raw::LocalMutexNode`] keys respectively. Keys must be generated by
the [`thread_local_node!`] and [`thread_local_parking_node!`] macros. This
feature also enables memory optimizations for [`barging::Mutex`] and
[`parking::barging::Mutex`] locking operations. This feature is not `no_std`
compatible.
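The idea behind these APIs can be sketched with `std`'s own `thread_local!`: each thread keeps one reusable node, so callers never declare or allocate a node per lock operation. `ScratchNode` and `with_local_node` are hypothetical names invented for this sketch, not this crate's API:

```rust
use std::cell::RefCell;

// Stand-in for a queue node; the real node layout is an implementation detail.
#[derive(Default)]
struct ScratchNode {
    _links: (usize, usize),
}

thread_local! {
    // One node per thread, reused across every lock/unlock cycle on that
    // thread, so no per-call allocation or declaration is needed.
    static NODE: RefCell<ScratchNode> = RefCell::new(ScratchNode::default());
}

// A locking API built this way takes a closure instead of returning a guard,
// so the node borrow provably ends when the critical section does.
fn with_local_node<R>(f: impl FnOnce(&mut ScratchNode) -> R) -> R {
    NODE.with(|node| f(&mut node.borrow_mut()))
}

fn demo() -> usize {
    // Reentering here would panic (the RefCell is already borrowed), which
    // mirrors why relocking through the same thread-local node is forbidden.
    with_local_node(|_node| 7)
}
```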

### barging

The `barging` feature provides locking APIs that are compatible with the
[lock_api] crate. It does not require node allocations from the caller. The
[`barging`] module is suitable for `no_std` environments, but
[`parking::barging`] is not. This implementation is not fair (it does not
guarantee FIFO ordering), but it can improve throughput when the lock is
heavily contended.

### lock_api

This feature implements the [`RawMutex`] trait from the [lock_api] crate for
both [`barging::Mutex`] and [`parking::barging::Mutex`]. Aliases are provided by
the [`barging::lock_api`] (`no_std`) and [`parking::barging::lock_api`] modules.

### parking

The `parking` feature provides mutex implementations that are capable of
putting threads waiting for the lock to sleep. These implementations are
published under the [`parking`] module. Each `no_std` mutex flavor provided by
this crate has a corresponding parking implementation under that module. Users
may select an out-of-the-box parking policy from [`parking::park`].

## Minimum Supported Rust Version (MSRV)

@@ -258,10 +306,18 @@
each of your dependencies, including this one.
[`raw::Mutex`]: https://docs.rs/mcslock/latest/mcslock/raw/struct.Mutex.html
[`raw::MutexNode`]: https://docs.rs/mcslock/latest/mcslock/raw/struct.MutexNode.html
[`raw::LocalMutexNode`]: https://docs.rs/mcslock/latest/mcslock/raw/struct.LocalMutexNode.html
[`parking`]: https://docs.rs/mcslock/latest/mcslock/parking/index.html
[`parking::park`]: https://docs.rs/mcslock/latest/mcslock/parking/park/index.html
[`parking::barging`]: https://docs.rs/mcslock/latest/mcslock/parking/barging/index.html
[`parking::barging::lock_api`]: https://docs.rs/mcslock/latest/mcslock/parking/barging/lock_api/index.html
[`parking::raw::Mutex`]: https://docs.rs/mcslock/latest/mcslock/parking/raw/struct.Mutex.html
[`parking::raw::LocalMutexNode`]: https://docs.rs/mcslock/latest/mcslock/parking/raw/struct.LocalMutexNode.html
[`parking::barging::Mutex`]: https://docs.rs/mcslock/latest/mcslock/parking/barging/struct.Mutex.html
[`barging`]: https://docs.rs/mcslock/latest/mcslock/barging/index.html
[`barging::lock_api`]: https://docs.rs/mcslock/latest/mcslock/barging/lock_api/index.html
[`barging::Mutex`]: https://docs.rs/mcslock/latest/mcslock/barging/struct.Mutex.html
[`thread_local_node!`]: https://docs.rs/mcslock/latest/mcslock/macro.thread_local_node.html
[`thread_local_parking_node!`]: https://docs.rs/mcslock/latest/mcslock/macro.thread_local_parking_node.html

[`std::sync::Mutex`]: https://doc.rust-lang.org/std/sync/struct.Mutex.html
[`std::thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
56 changes: 56 additions & 0 deletions examples/parking.rs
@@ -0,0 +1,56 @@
use std::sync::mpsc::channel;
use std::sync::Arc;
use std::thread;

// Requires `parking` feature.
// `spins::Mutex` spins for a while then parks during contention.
use mcslock::parking::raw::{spins::Mutex, MutexNode};

// Requires that the `thread_local` feature is enabled.
mcslock::thread_local_parking_node! {
    // * Allows multiple static definitions, must be separated with semicolons.
    // * Visibility is optional (private by default).
    // * Requires `static` keyword and a UPPER_SNAKE_CASE name.
    pub static NODE;
    static UNUSED_NODE;
}

fn main() {
    const N: usize = 10;

    // Spawn a few threads to increment a shared variable (non-atomically), and
    // let the main thread know once all increments are done.
    //
    // Here we're using an Arc to share memory among threads, and the data inside
    // the Arc is protected with a mutex.
    let data = Arc::new(Mutex::new(0));

    let (tx, rx) = channel();
    for _ in 0..N {
        let (data, tx) = (data.clone(), tx.clone());
        thread::spawn(move || {
            // A queue node must be mutably accessible.
            let mut node = MutexNode::new();
            // The shared state can only be accessed once the lock is held.
            // Our non-atomic increment is safe because we're the only thread
            // which can access the shared state when the lock is held.
            //
            // We unwrap() the return value to assert that we are not expecting
            // threads to ever fail while holding the lock.
            data.lock_with_then(&mut node, |data| {
                *data += 1;
                if *data == N {
                    tx.send(()).unwrap();
                }
                // The lock is unlocked here at the end of the closure scope.
            });
            // The node can now be reused for other locking operations.
            let _ = data.lock_with_then(&mut node, |data| *data);
        });
    }
    let _message = rx.recv();

    // A thread local node is borrowed.
    let count = data.lock_with_local_then(&NODE, |data| *data);
    assert_eq!(count, N);
}
2 changes: 2 additions & 0 deletions examples/thread_local.rs
@@ -35,6 +35,7 @@ fn main() {
            // threads to ever fail while holding the lock.
            //
            // Data is exclusively accessed by the guard argument.
            // A thread local node is borrowed.
            data.lock_with_local_then(&NODE, |data| {
                *data += 1;
                if *data == N {
@@ -46,6 +47,7 @@
    }
    let _message = rx.recv();

    // A thread local node is borrowed.
    let count = data.lock_with_local_then(&NODE, |data| *data);
    assert_eq!(count, N);
}