Data race reported by thread sanitizer when using upgradable read guard #306
https://github.com/NobodyXu/concurrent_arena/blob/80d6ee09d101b5cd9d40d712775a776ba7edca5b/src/bucket.rs#L87 is wrong. There is no lock preventing two threads from setting the …

Tip: when you get a data race or memory corruption, check all …
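For context, the locking rules being referred to can be sketched as follows (an illustration of parking_lot's documented `RwLock` guard semantics, not code from concurrent_arena; the names are made up):

```rust
use parking_lot::RwLock;

// Guard compatibility for parking_lot's RwLock:
// - any number of `read()` guards may coexist;
// - exactly one `upgradable_read()` guard may coexist with `read()` guards;
// - a `write()` guard (or an upgraded guard) excludes everything else.
// Mutating shared state while holding anything weaker than a write guard
// therefore races with concurrent readers.
fn main() {
    let lock = RwLock::new(0u32);

    let r1 = lock.read();
    let r2 = lock.read(); // fine: shared readers coexist
    drop((r1, r2));

    let upgradable = lock.upgradable_read();
    let r3 = lock.read(); // fine: one upgradable guard plus shared readers
    drop(r3);
    drop(upgradable);

    let mut w = lock.write(); // exclusive: no other guard exists now
    *w += 1;
}
```

In other words, only a write guard (or a guard upgraded from `upgradable_read`) gives exclusive access; a plain or upgradable read guard does not stop other threads from reading concurrently.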
Thanks for taking the time to read my code! However, I don't think that is the case here, since …
That …
Could it be that the thread sanitizer implementation in Rust is buggy? It seems that the implementation still has some bugs, and sometimes the linking just fails.
I tried running this locally (x86_64 Linux) but couldn't reproduce the issue. In the past there have been some issues with RwLock upgrade, but this should have been fixed by c866ba8.
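For anyone trying to reproduce this locally, a small stress harness along these lines (a hypothetical sketch, not a test from either repository) can be run under the thread sanitizer on nightly, e.g. with `RUSTFLAGS="-Zsanitizer=thread"` and an explicit `--target`:

```rust
use std::sync::Arc;
use std::thread;

use parking_lot::{RwLock, RwLockUpgradableReadGuard};

// Hypothetical stress test: several threads alternate between plain reads
// and upgradable reads that upgrade to writes, which is the access pattern
// the sanitizer report is about.
fn main() {
    let data = Arc::new(RwLock::new(Vec::<usize>::new()));
    let handles: Vec<_> = (0..8)
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                for _ in 0..10_000 {
                    if i % 2 == 0 {
                        // Reader path: only observes the Vec.
                        let _len = data.read().len();
                    } else {
                        // Upgradable path: upgrades before growing the Vec.
                        let guard = data.upgradable_read();
                        if guard.len() < 1_024 {
                            let mut write = RwLockUpgradableReadGuard::upgrade(guard);
                            write.push(i);
                        }
                    }
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```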
That's strange, since the GitHub workflow can reproduce this data race error reliably. It is using …
Here is some additional information on my local x86_64 Linux: …
@Amanieu I think that this might be a false positive, since the thread sanitizer might actually be catching UB in the allocator. In the other data race I found in … So I will close this issue for now, since I have no solid proof that …
Hi,

I've been working on my own project NobodyXu/concurrent_arena, and I used the thread sanitizer of unstable Rust.

It reported that my use of an upgradable read guard to access the `Vec` protected by an `RwLock` has a data race, and I have no idea how to fix it.

Here is the link to the code in `Arena::try_reserve` that the data race occurred in, and here is the log of the thread sanitizer error in the CI.
This issue is also reported in NobodyXu/concurrent_arena#1.
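The pattern described above roughly has the following shape (a hypothetical sketch, not the actual `Arena::try_reserve` implementation; the `Bucket` type and the reuse logic are invented for illustration):

```rust
use std::sync::Arc;

use parking_lot::{RwLock, RwLockUpgradableReadGuard};

struct Bucket;

// Hypothetical sketch: look for an existing bucket under an upgradable
// read lock, and only upgrade to a write lock when the Vec must grow.
fn try_reserve(buckets: &RwLock<Vec<Arc<Bucket>>>) -> Arc<Bucket> {
    let guard = buckets.upgradable_read();
    if let Some(bucket) = guard.last() {
        // Fast path: reuse an existing bucket while holding only the
        // upgradable read guard.
        return Arc::clone(bucket);
    }
    // Slow path: upgrade so the Vec can be mutated exclusively.
    let mut write = RwLockUpgradableReadGuard::upgrade(guard);
    let bucket = Arc::new(Bucket);
    write.push(Arc::clone(&bucket));
    bucket
}
```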