Added reversed for proof-of-stake syncing #3092
Conversation
0fa980c → 5ca7263
4713232 → 3568d3c
Note: this is just an experimental version; it does not make use of the request skeleton and it can download roughly 4000 blocks/second. Treat this as a mere prototype: it was able to sync all the way down from block 13,665,338 in roughly 57 minutes, and all subsequent stages execute correctly.
Sorry, I introduced some conflicts for you with my simplification PR. Once you have resolved them, I will review this one again.
eth/stagedsync/stage_headers.go
Outdated
var req headerdownload.HeaderRequest
for !stopped {
	sentToPeer := false
	for !sentToPeer {
Sorry, it is this loop that will be the tight one.
That is one of the reasons why, in the forward sync, I limited the number of iterations to 64, and also inserted a select clause with a timer, with ctx.Done(), and with progress printing. I recommend you do the same here.
If I used a 30-second timer I would get 6.8 blk/sec. What I could do instead is: when sentToPeer == false, sleep for 30 milliseconds; that way I don't peg the CPU.
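The sleep-on-failure idea from the comment above could look roughly like this. `trySend` is a hypothetical stand-in for handing a request to a peer; the real code would attempt delivery through the sentry.

```go
package main

import (
	"fmt"
	"time"
)

// trySend is a hypothetical stand-in for offering a header request to a
// peer; here it pretends every peer is busy until the third attempt.
func trySend(attempt int) bool { return attempt >= 3 }

// sendWithBackoff retries until a peer accepts the request, sleeping briefly
// after each failed attempt so the retry loop does not spin at full speed.
func sendWithBackoff(maxAttempts int) (attempts int, sent bool) {
	for attempts < maxAttempts {
		attempts++
		if trySend(attempts) {
			return attempts, true
		}
		time.Sleep(30 * time.Millisecond) // back off instead of busy-waiting
	}
	return attempts, false
}

func main() {
	attempts, sent := sendWithBackoff(10)
	fmt.Println(attempts, sent)
}
```

A fixed 30 ms sleep trades a small amount of latency per failed attempt for a drastic drop in CPU usage compared to a tight `for !sentToPeer` loop.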
09a0ab3 → 9af084a
f99b813 → 2186e3e
That's probably enough commenting for now; I think we can start with that. But there is still room for improvement :)
2cdff6f → 58a21a7
cmd/sentry/download/downloader.go
Outdated
canRequestMore = canRequestMore || requestMore
if len(penalties) > 0 {
	cs.Penalize(ctx, penalties)
if cs.Hd.GetBackwards() {
I'm a bit confused about this backwards flag. Firstly, my understanding is that the existing (pre-Merge) downloader downloads headers mostly (besides skeleton requests) backwards, towards the genesis. So "backwards" meaning PoS sync is a bit misleading. Secondly, why is it all or nothing mode? Couldn't exclusively backwards downloading of PoS blocks run simultaneously with the old skeleton+backwards algorithm for pre-Merge blocks? To sum up, to my mind "GetBackwards" is a misleading name.
I changed it to POSSync; the flag indicates whether we are doing proof-of-stake sync or not.
So does it mean that we can't download PoS and PoW headers simultaneously?
We can. If we assume that the block supplied by the beacon chain is correct, we sync backwards until we find a block whose parent hash is present in our database; this can be past the terminal block. If that happens, we unwind.
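The backwards walk described in this comment can be sketched as below. This is an illustration only: `syncBackwards`, the `Header` struct, and the maps standing in for peer fetches and the local DB are all hypothetical simplifications of what the downloader actually does.

```go
package main

import "fmt"

// Header is a simplified stand-in for a block header.
type Header struct {
	Number     uint64
	Hash       string
	ParentHash string
}

// syncBackwards walks from the beacon-chain tip towards genesis, collecting
// headers until it reaches one whose parent hash is already in the local DB.
// byHash stands in for fetching headers from peers; inDB for the database.
func syncBackwards(tip Header, byHash map[string]Header, inDB map[string]bool) []Header {
	pending := []Header{tip}
	current := tip
	for !inDB[current.ParentHash] {
		parent, ok := byHash[current.ParentHash]
		if !ok {
			break // parent not available yet; stop and wait for more data
		}
		// fetching strictly by parent hash guarantees the linkage is valid
		pending = append(pending, parent)
		current = parent
	}
	return pending
}

func main() {
	byHash := map[string]Header{
		"h3": {3, "h3", "h2"},
		"h2": {2, "h2", "h1"},
		"h1": {1, "h1", "h0"},
	}
	inDB := map[string]bool{"h0": true} // an ancestor already in our database
	got := syncBackwards(Header{4, "h4", "h3"}, byHash, inDB)
	fmt.Println(len(got))
}
```

If the anchor block found in the DB is not on the canonical chain (i.e. the walk reached past the terminal block onto a side branch), that is the point where an unwind would be triggered.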
Hmm. What about a node that wants to sync from scratch some time after the Merge? It doesn't have any PoW headers in the DB yet.
Okay, I think that headersForward means that headers are inserted in "forward" order. If you removed the pre-verified hashes, you would have to do the consensus check in forward order (genesis to head); however, in our case we process the headers in downward order: we start from the head and go to the genesis.
If you think that headersForward is wrong/misleading, that can be refactored in another PR; I believe it is not in the scope of this one.
Hmm. I see the following problem with inserting PoS headers into the DB in the downwards direction. Previously we had a single number for header stage's progress: the highest header inserted into the DB. Now there are essentially two numbers: the highest PoW header & the lowest PoS header that are inserted into the DB. So what happens when a node is restarted while there's a gap between the two? It doesn't fit into our current notion of a single number describing stage progress.
I'm wondering why we have to insert PoS headers into the DB in the downwards direction anyway. To my mind it's simpler to tweak the current algo: have a long PoS segment growing downwards in memory, and then insert it into the DB when it touches the terminal PoW block (or a pre-verified PoS block after the Merge). @AlexeyAkhunov what do you think?
We need to insert them backwards to check their validity: we need to check that each block's parent hash is equal to the previous (higher) block's recorded parent, and also whether we need to unwind.
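The linkage check being discussed is straightforward to state in code. This is an illustrative sketch with a minimal hypothetical `Header` type, not the actual validation in the PR.

```go
package main

import "fmt"

// Header is a minimal illustrative header with just the linkage fields.
type Header struct {
	Hash       string
	ParentHash string
}

// verifyDescending takes headers ordered from the head downwards and checks
// that each header's recorded parent hash matches the hash of the next
// (older) header in the slice.
func verifyDescending(headers []Header) bool {
	for i := 0; i+1 < len(headers); i++ {
		if headers[i].ParentHash != headers[i+1].Hash {
			return false
		}
	}
	return true
}

func main() {
	chain := []Header{{"c", "b"}, {"b", "a"}, {"a", "genesis"}}
	fmt.Println(verifyDescending(chain))
}
```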
But we can verify parent hashes in memory, before committing to the DB, and grow in memory a long PoS segment from the Beacon chain tip downwards until we touch a PoW block in the DB. That's what the current algo does: it grows segments in memory downwards.
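The alternative proposed here, growing the segment in memory and only committing once it attaches to a block already in the DB, could look roughly like this. All names (`MemDB`, `commitIfAttached`) are illustrative, not the repository's API.

```go
package main

import (
	"fmt"
	"sort"
)

// Header is a simplified stand-in for a block header.
type Header struct {
	Number     uint64
	Hash       string
	ParentHash string
}

// MemDB is a toy stand-in for the header database.
type MemDB struct{ byHash map[string]bool }

func (db *MemDB) Has(hash string) bool { return db.byHash[hash] }
func (db *MemDB) Put(h Header)         { db.byHash[h.Hash] = true }

// commitIfAttached writes an in-memory segment (held tip-first, growing
// downwards) to the DB in ascending order, but only once the segment's
// lowest header links to a block that is already in the DB.
func commitIfAttached(db *MemDB, segment []Header) bool {
	if len(segment) == 0 {
		return false
	}
	lowest := segment[len(segment)-1]
	if !db.Has(lowest.ParentHash) {
		return false // still detached; keep growing the segment in memory
	}
	sort.Slice(segment, func(i, j int) bool { return segment[i].Number < segment[j].Number })
	for _, h := range segment {
		db.Put(h)
	}
	return true
}

func main() {
	db := &MemDB{byHash: map[string]bool{"pow": true}} // the terminal PoW block
	segment := []Header{{3, "c", "b"}, {2, "b", "a"}, {1, "a", "pow"}}
	fmt.Println(commitIfAttached(db, segment))
}
```

With this shape, stage progress stays a single number: nothing is persisted until the segment is contiguous with what is already in the DB, which sidesteps the two-numbers problem raised above.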
629e335 → daafe78
There is some git merge/rebase issue in the PR: I see unrelated stuff like ParliaConfig
As discussed, let's rename HeadersDownward & HeadersForward to something like HeadersPoS & HeadersPoW.
No description provided.