settings: increase number of cores test can use #138717

Merged: 1 commit into cockroachdb:master on Jan 29, 2025

Conversation

cthumuluru-crdb (Contributor)

Cluster setting updates on one node occasionally take over 45 seconds to propagate to other nodes. This test has failed a few times recently. The issue is not reproducible, even under stress testing with 10,000 repetitions. Increasing the timeout to 60 seconds to determine if it resolves the problem.

Epic: None
Fixes: #133732
Release note: None
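
For context, a minimal sketch of the kind of propagation check the test performs, using hypothetical helpers (settingValueOnNode, numNodes) in place of the real test harness; the 45s/60s timeout discussed above corresponds to the deadline parameter. This is an illustration only, not the actual test code.

// Sketch only: wait until every node reports the expected setting value,
// or give up after the deadline. settingValueOnNode is a hypothetical
// stand-in for querying the setting as seen by a given node.
package example

import (
	"fmt"
	"time"
)

func waitForPropagation(settingValueOnNode func(node int) string, numNodes int, want string, deadline time.Duration) error {
	start := time.Now()
	for {
		propagated := true
		for n := 0; n < numNodes; n++ {
			if settingValueOnNode(n) != want {
				propagated = false
				break
			}
		}
		if propagated {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("setting did not reach all %d nodes within %s", numNodes, deadline)
		}
		time.Sleep(100 * time.Millisecond)
	}
}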

@cockroach-teamcity (Member)

This change is Reviewable

@shubhamdhama (Contributor) left a comment

Based on our offline discussions, we see some other interesting logs too:

I241229 15:00:43.666504 35984 kv/kvserver/liveness/liveness.go:1067 ⋮ [T1,Vsystem,n2,liveness-hb] 478  retrying liveness update after ‹liveness.errRetryLiveness›: result is ambiguous: error=ba: ‹ConditionalPut [/System/NodeLiveness/2], EndTxn(commit modified-span (node-liveness)) [/System/NodeLiveness/2], [txn: 29d7ba85], [can-forward-ts]› RPC error: grpc: ‹node unavailable; try another peer› [code 2/Unknown] [exhausted] (last error: ‹failed to send RPC›: sending to all replicas failed; last error: ba: ‹ConditionalPut [/System/NodeLiveness/2], EndTxn(commit modified-span (node-liveness)) [/System/NodeLiveness/2], [txn: 29d7ba85], [can-forward-ts]› RPC error: grpc: ‹node unavailable; try another peer› [code 2/Unknown])
....
I241229 15:00:43.972075 41534 gossip/client.go:145 ⋮ [T1,Vsystem,n2] 484  closing client to n1 (‹127.0.0.1:34981›): stopping outgoing client to n1 (‹127.0.0.1:34981›); already have incoming

Rather than increasing the timeout, maybe we can explore printing an ack. Or, as the logs suggest, the connection from n2 to n1 is broken; did this impact the rangefeed?
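
A rough sketch of the "print an ack" idea (a hypothetical hook, not the actual CockroachDB settings API): each node logs the moment it applies the updated setting, so a slow node's log shows exactly when the update arrived instead of only the test-side timeout.

// Hypothetical integration point: onSettingApplied would be called from
// whatever callback fires when a node applies a cluster setting update
// locally, logging an explicit ack with a high-resolution timestamp.
package example

import (
	"log"
	"time"
)

func onSettingApplied(nodeID int, name, value string) {
	log.Printf("n%d: ack cluster setting %s=%s at %s",
		nodeID, name, value, time.Now().Format(time.RFC3339Nano))
}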

@rimadeodhar (Collaborator) left a comment

Sending it back to your queue to address @shubhamdhama's comments.

@cthumuluru-crdb (Contributor, Author) left a comment

I didn't find anything suspicious in the logs. I spent some time browsing the rangefeed codebase, but nothing stood out. Based on discussions with other DB engineers in #db-engineering and #db-server-team, I'm increasing the resources for the test for now to see if it fails again. I'm not adding the code to capture the profile just yet.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained
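
As an illustration of the "more cores" direction (a hypothetical guard, not the change made in this PR), a test could also verify up front that it actually received enough parallelism, so a CPU-starved run is skipped with a clear message instead of timing out:

// Hypothetical helper: skip the test when fewer cores are available than
// the propagation path needs to keep up under load.
package example

import (
	"runtime"
	"testing"
)

func requireCores(t *testing.T, want int) {
	t.Helper()
	if got := runtime.NumCPU(); got < want {
		t.Skipf("need at least %d cores, have %d", want, got)
	}
}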

@cthumuluru-crdb changed the title from "settings: increase test timeout for cluster setting updates to propagate" to "settings: increase number of cores test can use" on Jan 29, 2025
Cluster setting updates on one node occasionally take over 45 seconds
to propagate to other nodes. This test has failed a few times recently.
The issue is not reproducible, even under stress testing with 10,000
repetitions. Increasing the number of cores the test can use to
determine if it resolves the problem.

Epic: None
Fixes: cockroachdb#133732
Release note: None
@cthumuluru-crdb (Contributor, Author)

bors r+

@craig craig bot merged commit 05eaf5a into cockroachdb:master Jan 29, 2025
22 checks passed
@cthumuluru-crdb (Contributor, Author)

blathers backport 23.2 24.3

blathers-crl (bot) commented on Feb 3, 2025

Based on the specified backports for this PR, I applied new labels to the following linked issue(s). Please adjust the labels as needed to match the branches actually affected by the issue(s), including adding any known older branches.


Issue #133732: branch-release-23.2.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

blathers-crl (bot) commented on Feb 3, 2025

Encountered an error creating backports. Some common things that can go wrong:

  1. The backport branch might have already existed.
  2. There was a merge conflict.
  3. The backport branch contained merge commits.

You might need to create your backport manually using the backport tool.


error creating merge commit from f152a7a to blathers/backport-release-23.2-138717: POST https://api.github.com/repos/cockroachdb/cockroach/merges: 409 Merge conflict []

you may need to manually resolve merge conflicts with the backport tool.

Backport to branch 23.2 failed. See errors above.


🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

Successfully merging this pull request may close this issue:

pkg/settings/integration_tests/integration_tests_test: TestSettingsPersistenceEndToEnd failed