executor: Support collecting information about retryable deadlocks to information_schema.deadlocks #26140
Conversation
```go
rec := deadlockhistory.ErrDeadlockToDeadlockRecord(deadlock)
deadlockhistory.GlobalDeadlockHistory.Push(rec)
cfg := config.GetGlobalConfig()
if deadlock.IsRetryable && !cfg.PessimisticTxn.DeadlockHistoryCollectRetryable {
```
If a retryable deadlock occurs many times, can it pollute the deadlock history?
Yes. That's why it is not collected by default.
I'm afraid it will never be turned on because it can't be changed online. For retryable deadlocks, it would be better to merge errors from the same statement and record the retry count.
I plan to make it support online changes via the HTTP API. But you are right, merging repeated errors is a good idea. I'll reconsider it.
To make a record become the latest when updating the retry count of an existing record, it seems we need to change the deadlock history collection (which is a simple queue now) into an LRU. @youjiali1995
I didn't support merging repeated errors for now since it needs too many code changes, but I added the HTTP API. PTAL again. @youjiali1995 @longfangsong
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
/merge
This pull request has been accepted and is ready to merge. Commit hash: f5444c4
@MyonKeminta: Your PR was out of date, I have automatically updated it for you. At the same time I will also trigger all tests for you: /run-all-tests. If the CI test fails, just re-trigger the test that failed and the bot will merge the PR for you after the CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
What problem does this PR solve?
Problem Summary: This PR is part of Lock View. It supports collecting retryable (in-statement) deadlocks. They are not collected by default, but collection can be enabled via config.
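Based on the config field seen in the diff (`cfg.PessimisticTxn.DeadlockHistoryCollectRetryable`) and TiDB's usual kebab-case TOML convention, enabling the collection would presumably look like the fragment below; the exact key name is an assumption inferred from the Go field, not confirmed by this thread:

```toml
# Hypothetical TiDB config fragment (key name inferred from the Go field).
[pessimistic-txn]
# Collect retryable (in-statement) deadlocks into
# information_schema.deadlocks; defaults to false.
deadlock-history-collect-retryable = true
```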
What is changed and how it works?
What's Changed: Supports collecting retryable deadlock errors.
Check List
Tests
Side effects
Documentation
Release note