Skip serializing blocks when persisting to db #3657
Comments
To mitigate this issue we could slow down backfill sync, so that only N blocks are hashed every X seconds to even out the load. Also, keeping the binary blob around to avoid re-serialization is a very nice trick that is easy to implement now.
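The "N blocks every X seconds" idea above could be sketched as a simple window-based throttle. This is only an illustration; the class and method names are hypothetical and not Lodestar's actual backfill code:

```typescript
// Hypothetical sketch: allow at most `maxBlocks` hash operations per
// `intervalMs` window, delaying callers once the budget is spent.
class BlockHashThrottle {
  private processedInWindow = 0;
  private windowStart = Date.now();

  constructor(private maxBlocks: number, private intervalMs: number) {}

  /** Resolves once the caller is allowed to hash another block. */
  async acquire(): Promise<void> {
    const now = Date.now();
    // Start a fresh window if the previous one has elapsed
    if (now - this.windowStart >= this.intervalMs) {
      this.windowStart = now;
      this.processedInWindow = 0;
    }
    // Budget exhausted: sleep until the window rolls over
    if (this.processedInWindow >= this.maxBlocks) {
      const waitMs = this.intervalMs - (now - this.windowStart);
      await new Promise((resolve) => setTimeout(resolve, waitMs));
      this.windowStart = Date.now();
      this.processedInWindow = 0;
    }
    this.processedInWindow++;
  }
}
```

The backfill loop would `await throttle.acquire()` before each `hashTreeRoot` call, spreading the CPU cost over time instead of hashing an entire batch at once.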
Should we expose this as a configurable param with a hidden CLI arg?
Yeah, I'd prefer that. Also, should we run this in a separate worker thread?
Actually, maybe! I think this is a completely independent process: given the initial conditions, it does not require communication with the main thread at all. However, is our database thread-safe? Can it handle multiple writes from different workers?
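Offloading the hashing work to a worker, as discussed above, could look roughly like this in Node.js. The worker body and the toy "hash" are placeholders; a real implementation would run `hashTreeRoot` over SSZ bytes and live in its own module file:

```typescript
import {Worker} from "node:worker_threads";

// Hypothetical sketch: run a CPU-heavy computation over block bytes in
// a worker thread so the main event loop stays responsive. The inline
// eval'd worker is for illustration only.
function hashInWorker(data: Uint8Array): Promise<number> {
  const workerCode = `
    const {parentPort, workerData} = require("node:worker_threads");
    // Toy stand-in for hashTreeRoot: sum of bytes mod 256
    let sum = 0;
    for (const b of workerData) sum = (sum + b) % 256;
    parentPort.postMessage(sum);
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerCode, {eval: true, workerData: data});
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}
```

Note this sketch only moves computation off the main thread; whether multiple workers can also *write* to the database concurrently depends on the db layer's thread-safety, which is exactly the open question above.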
There are other backfill sync performance issues mentioned in #3732 (comment).
Should I remove the bytes-caching bit from this PR (we can add it back once the SSZ v2 PR, #3669, is in)? It will save the cost of doing hashTreeRoot for parent/child relationship validation (about 8% CPU, as previously profiled by @tuyennhv).
Is your feature request related to a problem? Please describe.
A profile from contabo-17, which has a low peer count, shows that serializing blocks takes 8% of CPU due to backfill sync.
Describe the solution you'd like
When fetching blocks from p2p we already have the binary data, so we should be able to persist it to the db directly without having to call serialize() again.