Use partitioned lock to optimize disk cache #914
Labels
feature
New feature or request
Comments
please assign to me.
jiacai2050 pushed a commit that referenced this issue on May 24, 2023:

## Related Issues
Prepare for #914

## Detailed Changes
- Modify the type of `partitions` in `PartitionedMutex` and `PartitionedRwLock`.
- Fix the bug that multiple partitions use the same lock.

## Test Plan
Unit tests under the same file.
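The "multiple partitions use the same lock" bug mentioned in this commit is a classic sharding pitfall. The exact faulty code is not shown here, but a minimal sketch of one common way it happens in Rust (the `vec![elem; n]` macro cloning a shared `Arc`, so every partition ends up behind one mutex) looks like this:

```rust
use std::sync::{Arc, Mutex};

fn main() {
    // Buggy construction (illustrative): `vec![elem; n]` clones the Arc,
    // so every "partition" points at the SAME mutex and sharding gives
    // no contention benefit at all.
    let shared = vec![Arc::new(Mutex::new(0u64)); 4];
    assert!(Arc::ptr_eq(&shared[0], &shared[1])); // all slots share one lock

    // Correct construction: build each partition independently so each
    // slot owns a distinct lock.
    let distinct: Vec<Arc<Mutex<u64>>> =
        (0..4).map(|_| Arc::new(Mutex::new(0u64))).collect();
    assert!(!Arc::ptr_eq(&distinct[0], &distinct[1])); // distinct locks
    println!("ok");
}
```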
jiacai2050 pushed a commit that referenced this issue on May 29, 2023:

## Related Issues
Related to #914

## Detailed Changes
- Add `build_fixed_seed_ahasher` to build a fixed-seed ahasher.
- Use `PartitionedMutex` in `MemCache`.

## Test Plan
UT.
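The point of a fixed-seed hasher here is determinism: a randomly seeded `BuildHasher` maps the same key to different partitions in different hasher instances, while a fixed seed keeps the key-to-partition mapping stable. A minimal sketch of that property using only the standard library (std's `DefaultHasher::new()` uses fixed keys, standing in for the `ahash`-based helper in the commit; `bucket_fixed` is an illustrative name):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Map a key to one of `n` partitions with a deterministically seeded
// hasher, so the same key always lands in the same partition.
fn bucket_fixed(key: &str, n: usize) -> usize {
    let mut h = DefaultHasher::new(); // fixed keys => stable hashing
    key.hash(&mut h);
    (h.finish() as usize) % n
}

fn main() {
    // Two independent computations agree on the partition, which is
    // what a per-instance random seed would not guarantee.
    let a = bucket_fixed("object_store/0.sst", 16);
    let b = bucket_fixed("object_store/0.sst", 16);
    assert_eq!(a, b); // deterministic partition choice
    assert!(a < 16); // always a valid partition index
    println!("ok");
}
```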
jiacai2050 pushed a commit that referenced this issue on Jun 7, 2023:

## Rationale
Close #914

## Detailed Changes
Use `partition lock` in `disk cache`.

## Test Plan
Add UT.
dust1 pushed a commit to dust1/ceresdb that referenced this issue on Aug 9, 2023:

## Rationale
Close apache#914

## Detailed Changes
Use `partition lock` in `disk cache`.

## Test Plan
Add UT.
Describe This Problem
A partitioned lock is a common trick to reduce lock contention; the disk cache could use it to improve performance.
https://github.com/CeresDB/ceresdb/blob/da6899c4c97089d4fbd8aa8a01db98ab8366d5bc/components/object_store/src/disk_cache.rs#L114
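The idea can be sketched as follows, assuming hypothetical names (`ShardedCache`, `shard_for`) rather than the actual CeresDB types: keys are hashed into one of `2^bits` shards, each behind its own `RwLock`, so operations on different shards never contend and readers of the same shard proceed in parallel.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::RwLock;

// Minimal sketch of a partition-locked cache, not the CeresDB API.
struct ShardedCache {
    shards: Vec<RwLock<HashMap<String, Vec<u8>>>>,
    mask: usize, // shard count is a power of two, so mask = count - 1
}

impl ShardedCache {
    fn new(bits: u32) -> Self {
        let n = 1usize << bits;
        Self {
            // Build each shard independently so every shard owns its
            // own lock.
            shards: (0..n).map(|_| RwLock::new(HashMap::new())).collect(),
            mask: n - 1,
        }
    }

    // Hash the key and mask down to a shard index.
    fn shard_for(&self, key: &str) -> &RwLock<HashMap<String, Vec<u8>>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) & self.mask]
    }

    fn insert(&self, key: String, value: Vec<u8>) {
        // Write lock is held only for the one shard the key maps to.
        self.shard_for(&key).write().unwrap().insert(key, value);
    }

    fn get(&self, key: &str) -> Option<Vec<u8>> {
        // Concurrent readers of the same shard do not block each other.
        self.shard_for(key).read().unwrap().get(key).cloned()
    }
}

fn main() {
    let cache = ShardedCache::new(4); // 16 shards
    cache.insert("page-0".to_string(), vec![1, 2, 3]);
    assert_eq!(cache.get("page-0"), Some(vec![1, 2, 3]));
    assert_eq!(cache.get("page-1"), None);
    println!("ok");
}
```

Compared with a single lock around the whole map, the worst case is unchanged (all hot keys landing in one shard), but with a reasonable hash the expected contention drops by roughly the shard count.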
Proposal
Use `PartitionedRwLock` to replace `cache` in `DiskCache`:
https://github.com/CeresDB/ceresdb/blob/da6899c4c97089d4fbd8aa8a01db98ab8366d5bc/common_util/src/partitioned_lock.rs#L17
Additional Context
No response