settings: increase number of cores test can use #138717
Conversation
Based on our offline discussions, we see some other interesting logs too:
I241229 15:00:43.666504 35984 kv/kvserver/liveness/liveness.go:1067 ⋮ [T1,Vsystem,n2,liveness-hb] 478 retrying liveness update after ‹liveness.errRetryLiveness›: result is ambiguous: error=ba: ‹ConditionalPut [/System/NodeLiveness/2], EndTxn(commit modified-span (node-liveness)) [/System/NodeLiveness/2], [txn: 29d7ba85], [can-forward-ts]› RPC error: grpc: ‹node unavailable; try another peer› [code 2/Unknown] [exhausted] (last error: ‹failed to send RPC›: sending to all replicas failed; last error: ba: ‹ConditionalPut [/System/NodeLiveness/2], EndTxn(commit modified-span (node-liveness)) [/System/NodeLiveness/2], [txn: 29d7ba85], [can-forward-ts]› RPC error: grpc: ‹node unavailable; try another peer› [code 2/Unknown])
....
I241229 15:00:43.972075 41534 gossip/client.go:145 ⋮ [T1,Vsystem,n2] 484 closing client to n1 (‹127.0.0.1:34981›): stopping outgoing client to n1 (‹127.0.0.1:34981›); already have incoming
Rather than increasing the timeout, maybe we can explore printing the ack. Also, as the logs suggest, the connection from n2 to n1 is broken. Did this impact the rangefeed?
Sending it back to your queue to address @shubhamdhama's comments.
I didn't find anything suspicious in the logs. I spent some time browsing the rangefeed codebase, but nothing stood out. Based on the discussion with other DB engineers in #db-engineering and #db-server-team, I'm increasing the resources allocated to the test for now to see if it fails again. I'm not adding code to capture a profile just yet.
Reviewable status: complete! 0 of 0 LGTMs obtained
Cluster setting updates on one node occasionally take over 45 seconds to propagate to other nodes. This test has failed a few times recently. The issue is not reproducible, even under stress testing with 10,000 repetitions. Increasing the number of cores the test can use to determine if that resolves the problem.
Epic: None
Fixes: cockroachdb#133732
Release note: None
87ddcd5 to f152a7a
bors r+
blathers backport 23.2 24.3
Based on the specified backports for this PR, I applied new labels to the following linked issue(s). Please adjust the labels as needed to match the branches actually affected by the issue(s), including adding any known older branches.
Issue #133732: branch-release-23.2.
🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
Encountered an error creating backports. Some common things that can go wrong:
You might need to create your backport manually using the backport tool.
error creating merge commit from f152a7a to blathers/backport-release-23.2-138717: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict [] you may need to manually resolve merge conflicts with the backport tool.
Backport to branch 23.2 failed. See errors above.
🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
Cluster setting updates on one node occasionally take over 45 seconds to propagate to other nodes. This test has failed a few times recently. The issue is not reproducible, even under stress testing with 10,000 repetitions. Increasing the timeout to 60 seconds to determine if it resolves the problem.
Epic: None
Fixes: #133732
Release note: None
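For illustration only, here is a minimal Go sketch of the wait-for-propagation pattern the description refers to, with the 60-second deadline. This is not the actual test change; `waitForSettingPropagation` and the node accessor are hypothetical names, and the real test uses CockroachDB's own test harness and utilities.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// waitForSettingPropagation polls every node until all of them report the
// expected setting value, or gives up after the deadline (60s, matching the
// timeout mentioned in the PR description).
func waitForSettingPropagation(
	numNodes int,
	expected string,
	getSettingOnNode func(node int) string, // hypothetical per-node accessor
) error {
	const deadline = 60 * time.Second
	const pollInterval = 100 * time.Millisecond

	start := time.Now()
	for time.Since(start) < deadline {
		allMatch := true
		for n := 0; n < numNodes; n++ {
			if getSettingOnNode(n) != expected {
				allMatch = false
				break
			}
		}
		if allMatch {
			return nil
		}
		time.Sleep(pollInterval)
	}
	return fmt.Errorf("setting did not reach all %d nodes within %s", numNodes, deadline)
}

func main() {
	// Fake three-node "cluster" in which node 2 lags behind for a short while.
	var mu sync.Mutex
	values := []string{"new", "new", "old"}
	go func() {
		time.Sleep(300 * time.Millisecond)
		mu.Lock()
		values[2] = "new"
		mu.Unlock()
	}()
	get := func(n int) string {
		mu.Lock()
		defer mu.Unlock()
		return values[n]
	}
	fmt.Println("err =", waitForSettingPropagation(3, "new", get))
}
```

In the flake described above, the per-node values eventually agree but occasionally take longer than the old 45-second window, which is why the deadline (or the cores available to the test) is being raised rather than the polling logic changed.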