update I/O threads documentation #222

Open · wants to merge 4 commits into base: main
2 changes: 1 addition & 1 deletion topics/benchmark.md
@@ -266,7 +266,7 @@ in account.
+ Valkey commands return an acknowledgment for all usual commands. Some other data stores do not. Comparing Valkey to stores involving one-way queries is only mildly useful.
+ Naively iterating on synchronous Valkey commands does not benchmark Valkey itself, but rather measures your network (or IPC) latency and the client library's intrinsic latency. To really test Valkey, you need multiple connections (as with valkey-benchmark) and/or pipelining to aggregate several commands, and/or multiple threads or processes.
+ Valkey is an in-memory data store with some optional persistence options. If you plan to compare it to transactional servers (MySQL, PostgreSQL, etc.), then you should consider enabling AOF and decide on a suitable fsync policy.
+ Valkey is, mostly, a single-threaded server from the POV of commands execution (actually modern versions of Valkey use threads for different things). It is not designed to benefit from multiple CPU cores. People are supposed to launch several Valkey instances to scale out on several cores if needed. It is not really fair to compare one single Valkey instance to a multi-threaded data store.
+ Valkey primarily operates as a single-threaded server from the point of view of command execution. While the server can employ threads for I/O operations and command parsing, the core command execution remains sequential. For CPU-intensive workloads requiring multiple cores, users should consider running multiple Valkey instances in parallel. It is not really fair to compare a single Valkey instance to a multi-threaded data store.
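The pipelining point above can be sketched with a toy latency model (illustrative numbers only, not a measurement of Valkey itself):

```python
# Toy model: with a fixed network round-trip time (RTT), a naive synchronous
# loop is bounded by latency, while pipelining amortizes one round trip over
# many commands. The RTT value below is an assumption for illustration.

def max_ops_per_sec(rtt_seconds: float, pipeline_depth: int = 1) -> float:
    """Upper bound on commands/sec over one connection, ignoring server time."""
    return pipeline_depth / rtt_seconds

rtt = 0.001  # assume a 1 ms round trip
print(max_ops_per_sec(rtt))       # sequential loop: ~1,000 ops/sec
print(max_ops_per_sec(rtt, 16))   # pipeline of 16: ~16,000 ops/sec
```

This is why a single synchronous client mostly measures the network, not the server.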

The `valkey-benchmark` program is a quick and useful way to get some figures and
evaluate the performance of a Valkey instance on given hardware. However,
4 changes: 0 additions & 4 deletions topics/encryption.md
@@ -119,7 +119,3 @@ versions, ciphers and cipher suites, etc. Please consult the self documented
### Performance considerations

TLS adds a layer to the communication stack with overheads due to writing/reading to/from an SSL connection, encryption/decryption and integrity checks. Consequently, using TLS results in a decrease of the achievable throughput per Valkey instance.

### Limitations

I/O threading is currently not supported with TLS.
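This change removes the note that I/O threading is unsupported with TLS. As a rough sketch only — the directive names come from the self-documented valkey.conf, while the paths and values here are placeholders — the settings involved look like:

```
# Illustrative valkey.conf fragment (placeholder paths, assumed values):
port 0                          # disable the plaintext port
tls-port 6379                   # accept TLS connections instead
tls-cert-file /path/to/valkey.crt
tls-key-file /path/to/valkey.key
tls-ca-cert-file /path/to/ca.crt
io-threads 4                    # worker threads for socket reads/writes
```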
16 changes: 6 additions & 10 deletions topics/latency.md
@@ -156,22 +156,18 @@ as the main event loop.
In most situations, these kinds of system-level optimizations are not needed.
Apply them only if you require them and are familiar with them.

Single threaded nature of Valkey
Valkey sequential command execution
-------------------------------

Valkey uses a *mostly* single threaded design. This means that a single process
serves all the client requests, using a technique called **multiplexing**.
This means that Valkey can serve a single request in every given moment, so
all the requests are served sequentially. This is very similar to how Node.js
Valkey uses a *mostly* single threaded design for command execution. This means that a single process
executes all the client commands sequentially, using a technique called **multiplexing**.
While multiple commands can have their I/O operations processed concurrently in the background,
only one command can be executed at any given moment. This is similar to how Node.js
works. However, neither product is often perceived as slow.
This is caused in part by the small amount of time to complete a single request,
This is caused in part by the small amount of time to complete a single command execution,
but primarily because these products are designed to not block on system calls,
such as reading data from or writing data to a socket.

I said that Valkey is *mostly* single threaded since
we use threads in Valkey in order to perform some slow I/O operations in the
background, mainly related to disk I/O, but this does not change the fact
that Valkey serves all the requests using a single thread.
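The multiplexing technique described above can be sketched with Python's standard `selectors` module — a toy single-threaded event loop, not Valkey's actual implementation:

```python
import selectors
import socket

# One thread watches many connections and handles whichever is ready, so
# "commands" are executed one at a time even with several concurrent clients.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(3)]  # stand-ins for TCP clients

for i, (server_side, client_side) in enumerate(pairs):
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, data=i)
    client_side.sendall(b"PING")  # each "client" sends a command

served = []
while len(served) < len(pairs):
    for key, _ in sel.select():   # block until any connection is readable
        key.fileobj.recv(16)      # read the command...
        served.append(key.data)   # ...and "execute" it, one at a time

for server_side, client_side in pairs:
    server_side.close()
    client_side.close()

print(sorted(served))  # → [0, 1, 2]: all clients served by a single thread
```

No client ever blocks the loop for long, which is why sequential execution still feels responsive.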

Latency generated by slow commands
----------------------------------