Block import error: Backend error: Can't canonicalize missing block number #2482889 when importing {BLOCK_HASH} (#2486985) #12613
I experienced the same error, and a fresh sync doesn't help.
@pangwa so you can reproduce this every time?
I can see errors after a restart; the node reports the error below.
@pangwa could you run with
Hey, it looks like the node got synced after I performed a resync (deleting all the chain data and resyncing the node). The logs mentioned above seem to be harmless.
@bkchr I have the log, did you find anything interesting?
I had totally overlooked this. I looked into the logs and didn't see anything useful right now.
Hi. Any updates or estimates on this issue? We believe this issue causes our nodes to be unable to sync (Astar). Thanks!
There is this issue about missing block numbers on forced canonicalization. I have looked over the code 10000 times now, and there are possible ways this can be triggered, but I don't really know how it is triggered. So, this PR solves the symptom and not the cause. The block-number-to-hash mapping is set when we import a new best block. Forced canonicalization will now stop at the best block and canonicalize the remaining blocks later, once the best block has moved. As the error reports indicate that this issue mainly happens during major sync, there should not be any forks, so deferring the canonicalization shouldn't be that harmful. All known implementations should import all blocks as best blocks during major sync anyway (somewhere in there is the bug, but I haven't found it yet). I will also make some changes to Cumulus around a potential culprit for this issue. Closes: #12613
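The core idea of the fix described above can be sketched as a clamp: forced canonicalization never proceeds past the current best block, because the number-to-hash mapping is only guaranteed to exist up to there. This is a minimal illustrative sketch, not the actual Substrate code; the function name and signature are invented for illustration.

```rust
/// Hypothetical helper illustrating the fix: given the block number we would
/// normally force-canonicalize up to (`requested`) and the current best block
/// number, return the number it is safe to canonicalize up to right now.
/// Blocks beyond the best block are left for a later pass, once the best
/// block has advanced and their number-to-hash mappings exist.
fn clamp_canonicalization_target(requested: u64, best_number: u64) -> u64 {
    requested.min(best_number)
}

fn main() {
    // During major sync the requested target may exceed the best block;
    // canonicalization stops at the best block instead of erroring.
    assert_eq!(clamp_canonicalization_target(1_000, 900), 900);
    // Once the best block has moved past the target, canonicalize fully.
    assert_eq!(clamp_canonicalization_target(1_000, 1_200), 1_000);
}
```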
This issue has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/polkadot-release-analysis-v0-9-37/1736/1
Is there an existing issue?
Experiencing problems? Have you tried our Stack Exchange first?
Description of bug
I submitted to StackExchange
I tried waiting a few hours and restarting the node a few times, but the node can't recover from the error.
I also tried deleting the DB and doing a re-sync; the error still occurs, just on another block.
I heard other people have this problem too, so I think it's worth filing a bug report.
We're running the node in `--pruning archive` mode; in my case I'm using ParityDB.
Here's a log captured with `-l db=trace,sync=trace`:
khala-node.filtered.log
Steps to reproduce
Start a new node and wait; running it overnight can trigger the error.
docker run -dti --name khala-node -e NODE_ROLE=MINER -v ~/khala-node:/root/data phalanetwork/khala-node
The command is equivalent to `khala-node --pruning archive -- --pruning archive`.