
So, up to now, approximately how many known states if fully synced? #245

Closed
gongyaodon opened this issue May 24, 2021 · 17 comments

Comments

@gongyaodon

No description provided.

@n8twj

n8twj commented May 24, 2021

I have a node that has been syncing for a week now:

eth.syncing
{
currentBlock: 7677152,
highestBlock: 7677221,
knownStates: 680899667,
pulledStates: 680825447,
startingBlock: 7627578
}

@gongyaodon
Author

> I have a node that has been syncing for a week now:
>
> eth.syncing
> {
> currentBlock: 7677152,
> highestBlock: 7677221,
> knownStates: 680899667,
> pulledStates: 680825447,
> startingBlock: 7627578
> }

So horrible; we run a node and only 44****** states have synced so far.
What exactly is knownStates? It's so huge...

@skyh24

skyh24 commented May 24, 2021

eth.syncing
{
currentBlock: 7679300,
highestBlock: 7679375,
knownStates: 744673926,
pulledStates: 744557060,
startingBlock: 7511148
}

1.1.0 still hasn't finished!

@gongyaodon
Author

@guagualvcha, President Gua, is there any solution?

@CryptoVader

Does anybody know how many knownStates are required to be fully synced?

@urkishan

urkishan commented May 26, 2021

I am also facing this; my node always shows about 100 blocks behind.

{
  currentBlock: 7738573,
  highestBlock: 7738689,
  knownStates: 370745564,
  pulledStates: 370690961,
  startingBlock: 7716912
}

I need to go live on BSC ASAP, but it is stuck here and it is making me sick.

@zhongfu

zhongfu commented Jun 3, 2021

The number of knownStates in eth.syncing is not representative of sync progress if your storage can't keep up (e.g. low IOPS)

A node that can keep up should finish a fast sync at approximately 300M or so knownStates

If you're seeing a ridiculously high number of knownStates (>400M as a rough ballpark?), then it's quite possible that your node:

  • has poor disk read/write i/o performance,
  • is struggling to download all the state entries fast enough before the pivot shifts, and
  • will likely never finish syncing unless you change something

I had a similar issue when I first attempted to run a BSC node; in my case, it was because I was using zfs. Not sure why -- maybe it's the large record size, or the fact that it's a CoW fs. Switching to xfs allowed all of my nodes to complete a fast sync in less than 10h

If you're on a VPS or some other cloud service, make sure you're able to get enough IOPS. Your provider might be:

  • limiting disk IOPS (e.g. Amazon)
  • oversubscribing their SSDs (e.g. 50 VPSes, all doing io-intensive things, on a single SSD)
  • provisioning crap storage for your VPS/instance

As an anecdote, my last few fast syncs (from scratch) have resulted in state sync completing before all the block headers/bodies were downloaded. If you're seeing this, then your node will probably finish syncing. Otherwise... I guess not all hope is lost, so it's worth letting it continue to run (until it gets stuck a few hundred blocks behind highestBlock for an extended period of time)
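One way to check the "keep up" condition described above is to sample eth.syncing twice and compare how fast pulledStates grows versus knownStates. A minimal sketch (the function name, sample interval, and snapshot numbers are illustrative, not part of geth's API):

```javascript
// Sketch: decide whether state download is keeping up with state discovery.
// `before` and `after` are two eth.syncing snapshots taken `seconds` apart.
function stateSyncRate(before, after, seconds) {
  var pulledPerSec = (after.pulledStates - before.pulledStates) / seconds;
  var knownPerSec = (after.knownStates - before.knownStates) / seconds;
  return {
    pulledPerSec: pulledPerSec,
    knownPerSec: knownPerSec,
    // if pulls consistently lag discovery, the backlog
    // (knownStates - pulledStates) keeps growing and the
    // sync may never finish before the pivot moves again
    keepingUp: pulledPerSec >= knownPerSec
  };
}

// Hypothetical samples 60s apart, loosely based on numbers in this thread:
var r = stateSyncRate(
  { knownStates: 680899667, pulledStates: 680825447 },
  { knownStates: 680905667, pulledStates: 680835447 },
  60
);
```

In the geth console you could take `var before = eth.syncing`, wait a minute, then call the function with a fresh snapshot; a `keepingUp` that is false over many samples matches the stuck-sync symptom above.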

@urkishan

urkishan commented Jun 4, 2021

Related to #258

@ravinayag

My node has been running on an m5zn.2xlarge AWS instance for the last few days. I still see a variance of around 80 blocks, sometimes jumping to 120 blocks.
I have a separate 1TB NVMe gp3 volume with 16k IOPS & 150MB/s.

eth.syncing
{
currentBlock: 8126606,
highestBlock: 8126726,
knownStates: 756777931,
pulledStates: 756751364,
startingBlock: 8116319
}

eth.syncing.highestBlock - eth.syncing.currentBlock
123

Why this persistent variance? Is it an expected result?

@zhongfu

zhongfu commented Jun 9, 2021

@ravinayag

knownStates: 756777931,
pulledStates: 756751364,

Seems like your node isn't able to sync state entries fast enough. You should be seeing no more than ~300-350M states by the end of the sync if your node is able to keep up.

What's happening is that:

  • it selects a block height slightly (~100 blocks?) before the latest known block height as the pivot (currentBlock)
  • it downloads all the state trie nodes it can (for the state trie at the current pivot), until the pivot becomes stale (too far away from highestBlock)
  • rinse and repeat until all of the state trie nodes for the state trie at the current pivot have been downloaded
  • then, switch into full sync mode to download and execute the remaining blocks
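The loop above can be sketched roughly like this (illustrative only, not geth's actual implementation; the pivot offset and staleness window are assumed values):

```javascript
// Rough sketch of the fast-sync pivot logic described above. NOT geth's
// real code; PIVOT_OFFSET and STALE_AFTER are assumptions for illustration.
var PIVOT_OFFSET = 64;  // assumed distance behind the head for a new pivot
var STALE_AFTER = 128;  // assumed window before a pivot is considered stale

function choosePivot(highestBlock) {
  // pick a block slightly behind the latest known head
  return highestBlock - PIVOT_OFFSET;
}

function isPivotStale(pivotBlock, highestBlock) {
  // once the head moves too far past the pivot, its state trie can no
  // longer be completed, so state download restarts at a fresher pivot
  return highestBlock - pivotBlock > STALE_AFTER;
}
```

The point of the sketch: if state download is slower than pivot turnover, `isPivotStale` keeps firing and the node re-discovers large parts of the trie each time, which is why knownStates balloons on slow storage.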

Not sure, though -- seems like someone in #258 was able to sync properly with a 1TB gp3 EBS volume. Perhaps it's got something to do with instance configuration, e.g. filesystem?

@ravinayag

@zhongfu
Thanks for the detailed info.
Is ZFS the recommended FS, given the large number of files created during node sync?

Ref: ext4 vs zfs
Considering the factors from the above article: sequential read, number of files...

Has anybody tried xfs?

$  ls -l node/geth/chaindata | wc -l
232973

$ du -sh 7544969.ldb
2.1M    7544969.ldb

@zhongfu

zhongfu commented Jun 9, 2021

@ravinayag I'm using xfs on all of my nodes.

I haven't had a great experience with ZFS -- my guess is that it's got something to do with the recordsize of the dataset that I kept the chaindata in.

I've yet to try ext4, but I reckon it should be alright as well.

$ find chaindata/ | wc -l
75539

$ find chaindata/ -name "*.ldb" | tail -n 1
chaindata/3214003.ldb

@ravinayag

I did a fresh sync on a newly created xfs filesystem, and I think it was able to sync to within ~90 blocks of the head in 8 hours. But my initial question still stands, because this time I had 30k IOPS with an io2 volume type and xfs, and I followed the BSC full node sync guide. Output for reference:

$ find chaindata/ -name "*.ldb" | tail -n 1
chaindata/142683.ldb
$ find chaindata/ | wc -l
27836
eth.syncing
{
  currentBlock: 8149261,
  highestBlock: 8149369,
  knownStates: 191332221,
  pulledStates: 191264220,
  startingBlock: 0
}
eth.syncing.highestBlock - eth.syncing.currentBlock
123
eth.syncing.knownStates - eth.syncing.pulledStates
49569

I'm really worried now. What is next?
Do I need to keep my fingers crossed for the next few hours and watch the results, or do I still have a chance to try other options to get a fully synced node?

@zhongfu

zhongfu commented Jun 9, 2021

@ravinayag might be worth waiting a bit more -- you should be seeing at least ~290M knownStates by the end, I think

@ravinayag

@zhongfu, thank you for your inputs.
Finally, it got fully synced and I got "NaN" from eth.syncing, and this time I saw the instance memory (32GB) fully occupied.

@amitOodles

amitOodles commented Jul 27, 2021

@zhongfu, @ravinayag I am syncing my mainnet BSC node on a server with
8 cores, 32GB RAM, and an SSD with 5000 IOPS. The sync has been going on for 4 days, and this is the latest syncing status:

[screenshot: image_2021_07_27T07_57_09_708Z]

Also the IOPS details are given in this image

[screenshot: image_2021_07_27T07_58_09_988Z]

The IOPS limit is 5000, but the I/O taking place is below that limit and the node is still not getting synced. What could be the issue with the I/O usage?

@unclezoro
Collaborator

FYI #104

@keefel keefel closed this as completed Dec 2, 2021
galaio pushed a commit to galaio/bsc that referenced this issue Jul 31, 2024