Storaged goes insane after being fed a ChainAddEdge request with a batch size of 409600 edges #3465
Comments
I guess this is because the log entry is too big: raft can't return within 1 min (the default raft RPC timeout). In that case, the leader thinks its replication to the follower failed and goes into an infinite loop. We will add some logging to check this later.
Tested with raft kvput and a batch size of 4096; we reproduced this problem in nearly 20 minutes.
We double-checked this problem and found that it only happens when we are inserting edges.
In this case, TOSS will build a batch consisting of the original request plus 409600 locks.
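To make the suspected failure mode concrete, here is a minimal, self-contained sketch. It is not NebulaGraph's raftex code; the 60s deadline, the per-record cost, and the batch composition (edges plus TOSS locks) are assumptions used only to model a leader that keeps resending an oversized batch once the follower cannot apply it within the fixed RPC timeout:

```cpp
// Illustrative simulation only, not NebulaGraph's raft implementation.
#include <chrono>
#include <cstdio>

using namespace std::chrono;

// Assumed default raft RPC timeout of 1 minute.
constexpr auto kRpcTimeout = seconds(60);

// Assume ~1ms of WAL/apply work per record, purely for illustration.
seconds followerApplyTime(std::size_t batchSize) {
    return seconds(batchSize / 1000);
}

int main() {
    // Reported case: 409600 edges plus 409600 TOSS locks in one batch.
    const std::size_t batchSize = 409600 * 2;
    int attempts = 0;
    while (true) {
        ++attempts;
        if (followerApplyTime(batchSize) <= kRpcTimeout) {
            std::printf("replicated after %d attempt(s)\n", attempts);
            break;
        }
        // The RPC times out, so the leader assumes replication failed and
        // resends the *same* oversized entry. Nothing shrinks between
        // attempts, so the real loop never exits.
        std::printf("attempt %d: timed out after %llds, retrying...\n",
                    attempts, static_cast<long long>(kRpcTimeout.count()));
        if (attempts >= 3) {  // cap the demo so it terminates
            std::printf("(demo stops here; the real loop would spin forever)\n");
            break;
        }
    }
    return 0;
}
```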
Please check the FAQ documentation before raising an issue
Storaged goes insane after being fed a ChainAddEdge request with a batch size of 409600 edges. After that, storaged responds to every subsequent AddEdge request with
Code: E_WRITE_WRITE_CONFLICT
, and this situation can be reproduced steadily. From the storaged log:
We still cannot insert edges after we restart the whole cluster:
store1 is the new leader, and from its log we can see that:
Your Environments (required)
uname -a
g++ --version
or clang++ --version
lscpu
Commit id (e.g. a3ffc7d8)
How To Reproduce (required)
Steps to reproduce the behavior:
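A minimal sketch of how a request of that size could be built (the edge type follow(degree), the vertex ids, and the console invocation are assumptions for illustration, not the exact statements from this report):

```cpp
// Hypothetical helper with assumed names: emits one nGQL INSERT EDGE
// statement carrying 409600 edges, to be piped into nebula-console against
// a space that has an edge type `follow(degree int)`.
#include <cstdio>

int main() {
    const int kBatch = 409600;
    std::printf("INSERT EDGE follow(degree) VALUES ");
    for (int i = 0; i < kBatch; ++i) {
        // "src_i" -> "dst_i" with a single int property, comma-separated.
        std::printf("%s\"src_%d\"->\"dst_%d\":(1)", i == 0 ? "" : ", ", i, i);
    }
    std::printf(";\n");
    return 0;
}
```

Piping the generated statement into the console in one go should, when TOSS is enabled, result in a single oversized ChainAddEdge request of roughly the reported size on the storage side.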
Expected behavior
Additional context