Data race on stream.memory access #95
Comments
Do you have a test case that reproduces this? Our CI runs our tests with the race detector, and it hasn't occurred there.
It is currently being triggered by some of our integration tests. It won't be trivial to make a reproducible test, given that the code that accesses stream.memory is at lines 222 to 240 in a39b231.
Just had a closer look at the code. I'd assume the simplest fix is to actually hold the lock when calling sendWindowUpdate. Do you want to submit a PR?
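A minimal sketch of that fix, using simplified stand-in types rather than the real yamux internals (the memoryLock field, closeStream, and the method bodies here are assumptions for illustration only):

```go
package main

import "sync"

// Simplified stand-ins for the yamux Session/Stream types.
type Session struct {
	memoryLock sync.Mutex // guards per-stream memory accounting
}

type Stream struct {
	session *Session
	memory  int // bytes reserved from the session's memory manager
}

// sendWindowUpdate mutates s.memory, so it must be called with
// s.session.memoryLock held.
func (s *Stream) sendWindowUpdate(delta int) {
	s.memory += delta
	// ... write the window-update frame ...
}

// Read takes the session-level lock around the window update, so the
// write to s.memory cannot race with the session releasing it on close.
func (s *Stream) Read(b []byte) (int, error) {
	// ... copy data out of the receive buffer into b ...
	s.session.memoryLock.Lock()
	s.sendWindowUpdate(len(b))
	s.session.memoryLock.Unlock()
	return len(b), nil
}

// closeStream is the session-side path that releases the stream's
// reservation; taking the same lock removes the data race.
func (sess *Session) closeStream(s *Stream) {
	sess.memoryLock.Lock()
	s.memory = 0
	sess.memoryLock.Unlock()
}

func main() {}
```

This sketch also shows the asymmetry discussed below: the stream grows the reservation, while the session releases it.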
I can try! Holding a session lock when growing the stream's memory would be sufficient. The locking pattern there is a bit weird, though (a child object grabbing its parent's lock), and the reservation/release pattern is asymmetric: reservation is done by the stream, while release is done by the session.
Maybe one could split sendWindowUpdate into a getWindowUpdate function, and have the caller send it out?
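A rough sketch of that split, reusing the stand-in types from the previous snippet (pendingWindowDelta and sendWindowUpdateFrame are hypothetical helpers, not actual yamux functions): growing s.memory happens under the lock, while the frame is written out after the lock is released.

```go
// getWindowUpdate does the accounting under the lock and returns the
// delta; it does no network I/O.
func (s *Stream) getWindowUpdate() int {
	s.session.memoryLock.Lock()
	defer s.session.memoryLock.Unlock()
	delta := s.pendingWindowDelta()
	s.memory += delta
	return delta
}

// The caller (e.g. Read) sends the frame outside the lock.
func (s *Stream) readAndUpdate(b []byte) (int, error) {
	n := len(b) // placeholder for copying out of the receive buffer
	if delta := s.getWindowUpdate(); delta > 0 {
		s.sendWindowUpdateFrame(delta)
	}
	return n, nil
}

// Hypothetical helpers standing in for the real logic.
func (s *Stream) pendingWindowDelta() int         { return 1024 }
func (s *Stream) sendWindowUpdateFrame(delta int) { /* write the frame to the connection */ }
```

That keeps the network write out of the critical section, so holding the lock doesn't block on I/O.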
We have a couple of test failures on a -race build due to yamux racing on stream.memory. Looking at the code, sendWindowUpdate does mention that it should be called under a lock, but since it is called from Read, I'm not sure any of the existing users (incl. swarm) synchronize Read/Close with a separate lock. What do you think is the best approach for solving this race?
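For context, the shape of the race is roughly the following (a self-contained toy, not the actual yamux code): one goroutine goes through the Read path and grows the stream's memory reservation, while another goes through the close path and releases it, with no common lock. Running it with go run -race flags the two accesses.

```go
// Toy reproduction of the shape of the race (not the real yamux code);
// run with `go run -race` to see the report.
package main

import "sync"

type stream struct {
	memory int // stands in for yamux's stream.memory accounting
}

func main() {
	s := &stream{}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // Read path: sendWindowUpdate grows the reservation
		defer wg.Done()
		s.memory += 1024
	}()
	go func() { // Close path: the session releases the reservation
		defer wg.Done()
		s.memory = 0
	}()
	wg.Wait()
}
```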