So, up to now, approximately how many known states are there if fully synced? #245
I have a node that has been syncing for a week now:
So horrible: we run a node and only 44****** states have synced so far.
eth.syncing on 1.1.0 still hasn't finished!
@guagualvcha, President Gua, are there any solutions?
Does anybody know how many knownStates are required to be fully synced?
I am also facing this; my node always shows about 100 blocks behind.
I need to go live on BSC ASAP, and it's stuck here, which is making me sick.
A node that can keep up should finish a fast sync at approximately 300M known states or so. If you're seeing a ridiculously high number of known states, your node probably isn't pulling them fast enough to catch up.
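The "keeping up" check above can be sketched as a small helper. This is a minimal illustration (the function name, the classification strings, and the ~350M ceiling as an upper bound on the thread's "approximately 300M" figure are my assumptions, not anything from geth itself) that interprets the fields geth reports via `eth.syncing`:

```python
# Sketch: interpret the fields that geth's eth.syncing reports
# (currentBlock, highestBlock, knownStates, pulledStates) and guess
# whether a fast sync is keeping up. The ceiling below is a rough
# figure based on the numbers discussed in this thread, not a geth API.

KNOWN_STATES_CEILING = 350_000_000  # upper end of the "healthy" range

def sync_status(syncing: dict) -> str:
    """Classify a fast-sync snapshot shaped like eth.syncing output."""
    blocks_behind = syncing["highestBlock"] - syncing["currentBlock"]
    backlog = syncing["knownStates"] - syncing["pulledStates"]
    if syncing["knownStates"] > KNOWN_STATES_CEILING:
        return "state sync likely not keeping up"
    if blocks_behind < 100 and backlog > 0:
        return "near tip, still pulling states"
    return "syncing"

snapshot = {
    "currentBlock": 8_000_000,
    "highestBlock": 8_000_090,
    "knownStates": 310_000_000,
    "pulledStates": 295_000_000,
}
print(sync_status(snapshot))  # near tip, still pulling states
```

The thresholds are heuristics for eyeballing console output, not hard rules.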
I had a similar issue when I first attempted to run a BSC node; in my case, it was because I was using zfs. Not sure why -- maybe it's the large record size, or the fact that it's a CoW fs. Switching to xfs allowed all of my nodes to complete a fast sync in less than 10h. If you're on a VPS or some other cloud service, make sure you're able to get enough IOPS from your provider.
As an anecdote, my last few fast syncs (from scratch) have resulted in state sync completing before all the block headers/bodies were downloaded. If you're seeing this, then your node will probably finish syncing. Otherwise... I guess not all hope is lost, so it's worth letting it continue to run (at least until it gets stuck a few hundred blocks behind).
Related to #258
My node has been running on an m5zn.2xlarge AWS instance for the last few days. I still see a variance of around 80 blocks behind, sometimes jumping to 120 blocks.
Why this variance? Is it an expected result?
Seems like your node isn't able to sync state entries fast enough. You should be seeing no more than ~300-350M states by the end of the sync if your node is able to keep up. What's happening is that new state entries are being generated faster than your node can pull them, so the state sync never finishes.
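That "never finishes" failure mode can be shown with a toy model. All the rates below are made up for illustration; the point is only the sign of (growth rate - pull rate):

```python
# Toy model: if new state entries appear faster than the node pulls
# them, the backlog (knownStates - pulledStates) grows without bound.
# The rates here are illustrative, not measurements from any node.

def backlog_after(hours: int, growth_per_h: int, pull_per_h: int,
                  initial_backlog: int = 10_000_000) -> int:
    """Backlog of un-pulled state entries after `hours`, floored at 0."""
    backlog = initial_backlog
    for _ in range(hours):
        backlog = max(0, backlog + growth_per_h - pull_per_h)
    return backlog

# Healthy node: pulls faster than the chain grows -> backlog drains.
print(backlog_after(24, growth_per_h=1_000_000, pull_per_h=2_000_000))  # 0
# Starved node (e.g. low IOPS): the backlog only grows.
print(backlog_after(24, growth_per_h=1_000_000, pull_per_h=500_000))    # 22000000
```

In the second case no amount of waiting helps; the fix has to raise the pull rate (faster disk, more IOPS), not extend the runtime.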
Not sure, though -- seems like someone in #258 was able to sync properly with a 1TB gp3 EBS volume. Perhaps it's got something to do with instance configuration, e.g. the filesystem?
@zhongfu Ref: ext4 vs zfs. Has anybody tried xfs?
@ravinayag I'm using xfs on all of my nodes. I haven't had a great experience with ZFS -- my guess is that it's got something to do with the large record size or the CoW semantics. I've yet to try ext4, but I reckon it should be alright as well.
I did a fresh sync on a new xfs filesystem, and I think it was able to sync to within ~90 blocks of the tip in 8 hours. But my initial question still stands, because this time I had 30k IOPS on an io2 volume type with xfs (output attached for reference), and I followed the BSC full node sync guide.
I'm really worried now; what's next...?
@ravinayag might be worth waiting a bit more -- you should be seeing at least 290k knownStates, I think
@zhongfu, thank you for your inputs.
@zhongfu, @ravinayag I am syncing my mainnet BSC node on a server; the IOPS details are given in this image. The IOPS limit is 5000, but the I/O taking place is below the limit, and the node is still not getting synced. What could be the issue with the I/O usage?
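For a rough sense of why a 5000-IOPS cap can be the bottleneck even when utilization looks low: if each state entry costs on the order of one random disk operation (a simplifying assumption of mine; real LevelDB access patterns batch reads and amplify writes during compaction), the floor on state-sync time works out as:

```python
# Back-of-envelope: lower bound on the time to process ~300M state
# entries if each entry costs roughly one random I/O. This is a
# simplifying assumption; LevelDB write amplification can make the
# real cost several times higher.

STATE_ENTRIES = 300_000_000  # rough figure from this thread

def min_sync_hours(iops: int, ops_per_entry: float = 1.0) -> float:
    return STATE_ENTRIES * ops_per_entry / iops / 3600

print(round(min_sync_hours(5_000), 1))   # 16.7 -> best case at a 5k IOPS cap
print(round(min_sync_hours(30_000), 1))  # 2.8  -> best case at 30k IOPS
```

So at 5000 IOPS the theoretical best case is already ~17 hours of pure state I/O, and any write amplification multiplies that, which is consistent with nodes on capped volumes never catching up.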
FYI #104 |