Retry when store is down in high layer #2710
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
/run-all-tests
/run-all-tests
/run-all-tests
/run-all-tests
/run-all-tests
/merge
This pull request has been accepted and is ready to merge. Commit hash: 33af013
In response to a cherrypick label: new pull request created to branch
What problem does this PR solve?
#2705
What is changed and how it works?
This enhancement fixes the following situation: a voter is down and TiSpark fails.
It also fixes the following situation: the PD leader is down, the region fetched from PD is null, and an exception is thrown. According to the client-java implementation, switching the leader requires getting the members from PD; when it requests the down PD, it takes spark.tispark.grpc.timeout_in_sec (180s by default) * 2 to retry, which may block the leader switch for that long (360 seconds by default).
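Not part of this PR's diff, but as a rough illustration of the "retry in the high layer" idea described above, here is a minimal sketch. The class name, method signature, and the usage shown afterwards are hypothetical and are not the actual client-java or TiSpark API; they only show the pattern of retrying the whole operation at a higher layer with a short backoff instead of blocking on a single request to a down PD for the full gRPC timeout.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: retry a whole high-level operation a bounded number of
// times with a short backoff, rather than waiting out the full gRPC timeout on
// one request to a PD node that is down.
public final class HighLayerRetry {

  public static <T> T callWithRetry(Callable<T> operation, int maxAttempts, long backoffMillis)
      throws Exception {
    Exception lastError = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        T result = operation.call();
        if (result != null) {
          return result; // e.g. a non-null region fetched from PD
        }
      } catch (Exception e) {
        lastError = e; // e.g. the request failed because the PD leader is down
      }
      // Back off briefly so PD has a chance to finish switching leaders.
      TimeUnit.MILLISECONDS.sleep(backoffMillis);
    }
    if (lastError != null) {
      throw lastError;
    }
    throw new IllegalStateException("got null result after " + maxAttempts + " attempts");
  }
}
```

A caller could then wrap a region lookup, for example HighLayerRetry.callWithRetry(() -> pdClient.getRegionByKey(key), 3, 1000L), where pdClient and getRegionByKey are stand-ins for whatever lookup the higher layer actually performs.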
Check List
Tests
Code changes
Side effects
Related changes
tidb-ansible repository