
testutils/testcluster: TestManualReplication failed #136801

Closed
cockroach-teamcity opened this issue Dec 5, 2024 · 12 comments · Fixed by #139238
Assignees
Labels
A-leader-leases Related to the introduction of leader leases A-testing Testing tools and infrastructure branch-master Failures and bugs on the master branch. C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. T-kv KV Team

Comments


cockroach-teamcity commented Dec 5, 2024

testutils/testcluster.TestManualReplication failed on master @ a23be6bce928c3e08074a15815b3c67a657bb40e:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication3124605092
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected an external process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication3124605092
--- FAIL: TestManualReplication (10.72s)

Parameters:

  • attempt=1
  • run=23
  • shard=1
Help

See also: How To Investigate a Go Test Failure (internal)

/cc @cockroachdb/test-eng


Jira issue: CRDB-45258

@cockroach-teamcity cockroach-teamcity added branch-master Failures and bugs on the master branch. C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. T-testeng TestEng Team labels Dec 5, 2024
cockroach-teamcity (Member, Author) commented:

testutils/testcluster.TestManualReplication failed on master @ 49cff91f3501494deaf038671bc643c194a0e3ca:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication2606183705
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected an external process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication2606183705
--- FAIL: TestManualReplication (9.70s)

Parameters:

  • attempt=1
  • run=7
  • shard=1


testutils/testcluster.TestManualReplication failed on master @ 8c44b55e124927e7fce8bb8c0276e43019a9d5ec:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication571555227
    test_log_scope.go:76: use -show-logs to present logs inline
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication571555227
--- FAIL: TestManualReplication (9.22s)

Parameters:

  • attempt=1
  • run=1
  • shard=1

cockroach-teamcity (Member, Author) commented:

testutils/testcluster.TestManualReplication failed on master @ f9df57e2bebd963d10ffd7fa52e4d37cf01b80df:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication3843382051
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected a shared process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication3843382051
--- FAIL: TestManualReplication (10.26s)

Parameters:

  • attempt=1
  • run=24
  • shard=1

cockroach-teamcity (Member, Author) commented:

testutils/testcluster.TestManualReplication failed on master @ bc6d6e05a7c0f9ffd8103740239fdbc83fa78e3f:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication2238223443
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected a shared process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication2238223443
--- FAIL: TestManualReplication (9.26s)

Parameters:

  • attempt=1
  • run=19
  • shard=1


github-actions bot commented Jan 6, 2025

testutils/testcluster.TestManualReplication failed on master @ 71fa055c4e4206522e371372e4eea9bc965e9d13:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication3222708273
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected a shared process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication3222708273
--- FAIL: TestManualReplication (9.22s)

Parameters:

  • attempt=1
  • run=1
  • shard=1


github-actions bot commented Jan 6, 2025

testutils/testcluster.TestManualReplication failed on master @ b5d57fb09ba9c767a13cec561f77415c40a09d33:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication3526185831
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected a shared process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication3526185831
--- FAIL: TestManualReplication (9.22s)

Parameters:

  • attempt=1
  • run=1
  • shard=1

@DarrylWong DarrylWong self-assigned this Jan 8, 2025

DarrylWong commented Jan 8, 2025

Can't get this to repro after stressing it 3000 times, despite the somewhat frequent occurrence of the failures.

I see that all these failures take 9+ seconds to run, while locally the test passes in ~2 seconds. In the logs of the failures, I see the test usually hangs for ~5 seconds before finally outputting:

W250104 06:53:34.450722 37694 kv/kvserver/spanlatch/manager.go:650 ⋮ [T1,Vsystem,n1,s1,r77/1:‹/Tenant/1{0/Table/…-1}›,raft] 568  EndTxn(commit change-replicas) [/Local/Range/Tenant/10/Table/106/‹RangeDescriptor›], [txn: 7330a73c] has held latch for 5478884018 ns. Some possible causes are slow disk reads, slow raft replication, and expensive request processing.

The test fails right after that.

Perhaps the test hangs long enough that the lease is no longer valid. Still trying to determine what is causing the hang, though.


srosenberg commented Jan 10, 2025

Can't get this to repro after stressing it 3000 times, despite the somewhat frequent occurrence of the failures.

Maybe I'm lucky, but I got two repros within seconds of running:

./dev test pkg/testutils/testcluster -f TestManualReplication --stress --count=10000


srosenberg commented Jan 10, 2025

I intersected a bunch of failed runs on their common metamorphic settings; one suspect commonality is:

initialized metamorphic constant "kv.raft.leader_fortification.fraction_enabled" with value 1

After disabling it via RaftLeaderFortificationFractionEnabled (or via [1]), I can no longer reproduce. Can you confirm? If that's the case, we'll hand it over to KV for further analysis.

[1] #137080

DarrylWong (Contributor) commented:

Maybe I'm lucky

Okay, looks like I was just unlucky. Tried it again with --count=10000 and it fails after a few minutes.

Reproing it a couple more times, I see the same initialized metamorphic constant "kv.raft.leader_fortification.fraction_enabled" with value 1 pattern you're seeing. I also cannot repro with it disabled.

srosenberg (Member) commented:

Handing over to @cockroachdb/kv to see what's up with raft leader fortification. Setting kv.raft.leader_fortification.fraction_enabled to 1 leads to several flakes in a run of 100, e.g.,

./dev test pkg/testutils/testcluster -v -f TestManualReplication --count=100

//pkg/testutils/testcluster:testcluster_test                             FAILED in 5 out of 100 in 10.7s

@srosenberg srosenberg added T-kv KV Team and removed release-blocker Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked. T-testeng TestEng Team labels Jan 10, 2025
@cockroachdb cockroachdb deleted a comment from blathers-crl bot Jan 10, 2025
@DarrylWong DarrylWong removed their assignment Jan 10, 2025
@arulajmani arulajmani added C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. A-testing Testing tools and infrastructure A-leader-leases Related to the introduction of leader leases labels Jan 13, 2025
@iskettaneh iskettaneh self-assigned this Jan 14, 2025
cockroach-teamcity (Member, Author) commented:

testutils/testcluster.TestManualReplication failed on master @ 0b4d620740733ec61cf50ca26d19814299d91f8e:

=== RUN   TestManualReplication
    test_log_scope.go:165: test logs captured to: outputs.zip/logTestManualReplication737402850
    test_log_scope.go:76: use -show-logs to present logs inline
    test_server_shim.go:152: automatically injected a shared process virtual cluster under test; see comment at top of test_server_shim.go for details.
    testcluster_test.go:156: no valid lease
    panic.go:626: -- test log scope end --
test logs left over in: outputs.zip/logTestManualReplication737402850
--- FAIL: TestManualReplication (10.24s)

Parameters:

  • attempt=1
  • run=22
  • shard=1

iskettaneh added a commit to iskettaneh/cockroach that referenced this issue Jan 16, 2025
When we transfer a lease from N1 to N2, N2 might not immediately know
about the lease transfer and might still think that N1 holds the lease.
With leader leases, however, since it sometimes takes time until store
liveness grants support, the test might run into timing issues where N2
thinks that N1 has the lease, but the lease is UNUSABLE because now() is
so close to the lease min_expiration time. Also, sometimes N2 can't
determine the lease validity at all.

This commit fixes this by wrapping the lease inquiry after the lease
transfer in a SucceedsSoon.

Fixes: cockroachdb#136801

Release Note: None
craig bot pushed a commit that referenced this issue Jan 16, 2025
139238: testcluster: deflake TestManualReplication r=iskettaneh a=iskettaneh

When we transfer a lease from N1 to N2, N2 might not immediately know about the lease transfer and might still think that N1 holds the lease. With leader leases, however, since it sometimes takes time until store liveness grants support, the test might run into timing issues where N2 thinks that N1 has the lease, but the lease is UNUSABLE because now() is so close to the lease min_expiration time. Also, sometimes N2 can't determine the lease validity at all.

This commit fixes this by wrapping the lease inquiry after the lease transfer in a SucceedsSoon.

Couldn't reproduce the bug after more than 10,000 attempts.

Fixes: #136801

Release Note: None

Co-authored-by: Ibrahim Kettaneh <[email protected]>
@craig craig bot closed this as completed in 391a68c Jan 16, 2025
stevendanna pushed a commit to stevendanna/cockroach that referenced this issue Jan 24, 2025
(same commit message as above)