Cache locks #213

Closed
wants to merge 3 commits into from

Conversation

Wouterdek
Contributor

Replaces the current locking mechanism with TwoStageRWLocks in caches. Related to PRs #211 and #199.

@anaisbetts

Have you benchmarked this in typical use? Usually RW locks are much slower unless the access pattern is extremely read-skewed.
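To illustrate the read-skewed case being discussed: the one thing a reader-writer lock buys over a plain monitor is that multiple readers can hold it at once. Here is a minimal Java sketch of that property (using `ReentrantReadWriteLock` as a stand-in for the .NET two-stage lock; the class and method names are illustrative, not DynamicData code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    // Returns true if two reader threads held the read lock at the same time.
    // Each reader waits inside the lock until the other has also entered, so
    // this method can only return if read locks are genuinely concurrent.
    static boolean readersOverlap() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        CountDownLatch bothInside = new CountDownLatch(2);
        Runnable reader = () -> {
            lock.readLock().lock();
            try {
                bothInside.countDown();
                bothInside.await(); // returns only once both readers are inside
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.readLock().unlock();
            }
        };
        Thread t1 = new Thread(reader);
        Thread t2 = new Thread(reader);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            return false;
        }
        return true; // both joins returned, so neither reader blocked the other
    }

    public static void main(String[] args) {
        System.out.println("readers overlapped: " + readersOverlap());
    }
}
```

Swapping the read lock for a `synchronized` block would deadlock the two readers on the latch, which is exactly the serialization a monitor imposes on readers; whether that concurrency pays for the lock's extra per-operation cost is the question the benchmarks below address.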

@glennawatson
Member

Roland introduced a BenchmarkDotNet project; it might be worth adding some benchmarks there to see how this performs under a heavy mix of reads and writes.

@RolandPheasant
Collaborator

@paulcbetts this is a partial PR which breaks down the larger PR (#199) for adding a preview changes observable to DD. One similar PR has already been accepted for the observable list part of the system.

Before I merged it, I ran some benchmarks to check the performance of the observable list, which is being switched to the two-stage reader-writer lock. The results showed no significant performance overhead after the update by @Wouterdek.

However, your comment has made me look again. I have always been wary of reader-writer locks and commented as such on the original PR. Previously I benchmarked simple reads and writes at the source list level without specifically measuring the lock times. This has been rectified now using this snippet. The comparison is between a simple monitor lock and the new locking mechanism, and here are the results:

| Method            |   N | Mean         | Error      | StdDev     | Ratio | RatioSD |
|-------------------|----:|-------------:|-----------:|-----------:|------:|--------:|
| Monitor           |   1 |     25.70 ns |  0.5402 ns |  0.5547 ns |  1.00 |    0.00 |
| TwoStageReadLock  |   1 |    103.34 ns |  2.0743 ns |  2.0373 ns |  4.02 |    0.11 |
| TwoStageWriteLock |   1 |     47.51 ns |  0.6322 ns |  0.5913 ns |  1.85 |    0.05 |
| Monitor           |  10 |    282.09 ns |  2.4393 ns |  2.2818 ns |  1.00 |    0.00 |
| TwoStageReadLock  |  10 |  1,042.54 ns | 16.2385 ns | 15.1895 ns |  3.70 |    0.07 |
| TwoStageWriteLock |  10 |    486.89 ns |  6.4562 ns |  6.0392 ns |  1.73 |    0.03 |
| Monitor           | 100 |  2,811.51 ns | 18.2344 ns | 16.1643 ns |  1.00 |    0.00 |
| TwoStageReadLock  | 100 | 10,615.59 ns | 77.0986 ns | 72.1181 ns |  3.77 |    0.03 |
| TwoStageWriteLock | 100 |  4,828.18 ns | 75.1843 ns | 66.6489 ns |  1.72 |    0.02 |

Clearly the new locking mechanism is much slower: the read lock is up to four times slower than a simple monitor lock. The fact that no significant difference showed up in my former benchmark can be accounted for by that measurement including all the other overheads, such as maintaining state and creating change sets.
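The shape of this lock-only comparison can be sketched outside BenchmarkDotNet as well. Below is a rough Java analogue, with `synchronized` standing in for a Monitor lock and `ReentrantReadWriteLock` for the two-stage lock; a hand-rolled timing loop like this gives only ballpark numbers, not the statistical rigour of the table above:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOverheadBench {
    static final Object monitor = new Object();
    static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    static long sink; // mutated inside the lock so the JIT cannot elide the loop body

    // Time n acquire/release cycles of a plain monitor lock, in nanoseconds.
    static long timeMonitor(int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            synchronized (monitor) { sink++; }
        }
        return System.nanoTime() - start;
    }

    // Time n acquire/release cycles of the read side of a reader-writer lock.
    static long timeReadLock(int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            rw.readLock().lock();
            try { sink++; } finally { rw.readLock().unlock(); }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        // warm-up passes so the JIT compiles both paths before measurement
        timeMonitor(n);
        timeReadLock(n);
        System.out.printf("monitor:  %.1f ns/op%n", timeMonitor(n) / (double) n);
        System.out.printf("readLock: %.1f ns/op%n", timeReadLock(n) / (double) n);
    }
}
```

On an uncontended single thread this measures pure lock overhead, which is the same quantity the Monitor vs. TwoStageReadLock rows compare; exact ratios will differ across runtimes and lock implementations.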

@Wouterdek although there are existing performance overheads larger than locking, I feel it would be best to use a monitor lock only, so could you please update this PR (plus the source list) to use monitor only? I am sorry to mess you about, as what you are trying to achieve will be a good contribution to DD and I appreciate your efforts. On the original PR you stated that the reader-writer lock is not essential to the preview operator, but that better concurrent performance could be gained using the new lock. However, from what I can tell, read/write contention is not a bottleneck at all for most consumers of this library.

@Wouterdek Wouterdek closed this Mar 30, 2019
@lock lock bot locked and limited conversation to collaborators Jul 9, 2019