index segments by maximum by 2 workers #4041
AskAlexSharov authored May 1, 2022
1 parent 47f4926 commit c896807
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions eth/stagedsync/stage_headers.go
@@ -1102,8 +1102,8 @@ func DownloadAndIndexSnapshotsIfNeed(s *StageState, ctx context.Context, tx kv.R
 	if workers < 1 {
 		workers = 1
 	}
-	if workers > 4 {
-		workers = 4
+	if workers > 2 {
+		workers = 2 // 4 workers get killed on 16Gb RAM

@eval-exec

eval-exec May 1, 2022

Contributor

I think it would be better if we made this configurable via CLI arguments.

@AskAlexSharov

AskAlexSharov May 2, 2022

Author Collaborator

I think we need parameters like --disk=latency and --disk=throughput.
The first would minimize parallel reads (HDD-friendly), the second would do more of them (cloud-SSD-friendly).
Problem: our index creation is quite RAM-hungry for some reason; we need to investigate why.
And on top of this, yes, we can add a CLI flag to tune this individual place, something like --snap.blabala.
Ideally we would work single-threaded by default (easy to pprof, bottlenecks are obvious, no random reads on disk, …), but be able to throw a 256-core cloud machine at it when we need fast development iterations.
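The clamp being changed in this commit, plus the configurable worker count suggested in the comments, can be sketched as a small helper. The flag name `snap.index.workers` below is purely illustrative (in the spirit of the "--snap.*" knob mentioned above), not Erigon's actual CLI interface:

```go
package main

import (
	"flag"
	"fmt"
	"runtime"
)

// maxIndexWorkers is a hypothetical flag standing in for the tuning
// knob discussed above; the name is illustrative, not Erigon's real flag.
var maxIndexWorkers = flag.Int("snap.index.workers", 2,
	"max parallel workers for snapshot index creation")

// clampWorkers bounds a requested worker count to [1, max], mirroring
// the clamp in DownloadAndIndexSnapshotsIfNeed: at least one worker,
// and no more than max (4 workers were observed to get killed on 16Gb RAM).
func clampWorkers(requested, max int) int {
	if requested < 1 {
		return 1
	}
	if requested > max {
		return max
	}
	return requested
}

func main() {
	flag.Parse()
	// Start from the machine's core count, then apply the configured cap.
	workers := clampWorkers(runtime.NumCPU(), *maxIndexWorkers)
	fmt.Println("index workers:", workers)
}
```

With a flag like this, the HDD-friendly default stays conservative while a large cloud machine can opt into more parallelism.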

 	}
 	if err := snapshotsync.BuildIndices(ctx, cfg.snapshots, cfg.snapshotDir, *chainID, cfg.tmpdir, cfg.snapshots.IndicesAvailable(), workers, log.LvlInfo); err != nil {
 		return err
