Rationale

Main goals:

- Isolate the network to protect the main thread from DoS attacks
- Allow throttling the network processor more aggressively to keep main thread event loop lag under control

Also, for nodes subscribed to many subnets, recent benchmarks show that network handling takes a decent chunk of CPU time, lagging the main thread and delaying other pipelines such as block processing.
Architecture

Simple and lame chart for the general idea. The libp2p instance is rooted on the TCP transport's socket. It pulls in many components that are best kept together in the same thread:
- peer manager
- gossip
- reqresp
On the other hand, the main thread is the only thread with access to the state cache, so the network processor must remain there. For now we'll keep the network queues in the main thread too, but they could be moved to the worker later.
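For illustration, a minimal sketch of this split using Node.js worker_threads; the worker entrypoint path, event names, and message shapes here are assumptions, not the actual implementation:

```ts
// Minimal sketch, assuming a hypothetical ./networkWorker.js entrypoint that
// boots libp2p (peer manager, gossip, reqresp). Event shapes are illustrative.
import {Worker} from "node:worker_threads";

type WorkerToMainEvent =
  | {type: "gossipMessage"; topic: string; data: Uint8Array}
  | {type: "peerConnected"; peerId: string};

// The worker owns the libp2p instance and its TCP socket.
const networkWorker = new Worker("./networkWorker.js");

// Gossip objects are forwarded to the main thread, where the network processor
// (and, for now, the network queues) live next to the state cache.
networkWorker.on("message", (event: WorkerToMainEvent) => {
  if (event.type === "gossipMessage") {
    // push into the network processor queues for validation against cached state
  }
});

// Commands flow the other way, main thread -> worker (publish, subscribe, disconnect, ...).
networkWorker.postMessage({type: "subscribeTopic", topic: "beacon_attestation_0"});
```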
Interfaces
To do A/B testing we want a flag to switch the libp2p instance between a worker thread and the main thread. So we need (a wiring sketch follows the list):
A) Main thread "public" facing network API with a switchable backend. Implements NetworkPublic and calls NetworkCore internally (either B or C).
B) Internal network class with the logic de-duplicated from both backends. Implements NetworkCore with the actual logic.
C) Main-to-worker event-based interface to wire A to B through the worker. Implements NetworkCore and calls NetworkCore (B) internally.
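As referenced above, a rough sketch of how a single flag could pick between B and C behind the same interface. NetworkCoreImpl, WorkerNetworkCore, getNetworkCore, and the rpc bridge are hypothetical names for illustration, not the actual implementation:

```ts
// Hedged sketch only: the real NetworkCore interface is much larger (see the
// NetworkPublic interface below); names and signatures here are illustrative.
interface NetworkCore {
  publishGossip(topic: string, data: Uint8Array): Promise<number>;
}

// (B) Internal class with the de-duplicated logic, calling libp2p directly.
class NetworkCoreImpl implements NetworkCore {
  async publishGossip(topic: string, data: Uint8Array): Promise<number> {
    // would call gossipsub.publish(topic, data) and return the peer count
    return 0;
  }
}

// (C) Same interface, but every call is forwarded to the worker over an event/RPC bridge.
class WorkerNetworkCore implements NetworkCore {
  constructor(private readonly rpc: (method: string, args: unknown[]) => Promise<unknown>) {}

  publishGossip(topic: string, data: Uint8Array): Promise<number> {
    return this.rpc("publishGossip", [topic, data]) as Promise<number>;
  }
}

// (A) The public facade receives either backend, selected by a flag,
// which makes A/B testing a one-line switch.
function getNetworkCore(
  useWorker: boolean,
  rpc: (method: string, args: unknown[]) => Promise<unknown>
): NetworkCore {
  return useWorker ? new WorkerNetworkCore(rpc) : new NetworkCoreImpl();
}
```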
```ts
interface NetworkPublic extends NetworkCorePublic {
  // Functions using multiple reqresp / gossip methods
  publishBeaconBlockMaybeBlobs(signedBlock: BlockInput): Promise<void>;
  beaconBlocksMaybeBlobsByRange(peerId: PeerId, request: phase0.BeaconBlocksByRangeRequest): Promise<BlockInput[]>;
  beaconBlocksMaybeBlobsByRoot(peerId: PeerId, request: phase0.BeaconBlocksByRootRequest): Promise<BlockInput[]>;

  // ReqResp caller helpers
  // NOTE: Should map to a single fn to prevent boilerplate
  status(peerId: PeerId, request: phase0.Status): Promise<phase0.Status>;
  goodbye(peerId: PeerId, request: phase0.Goodbye): Promise<void>;
  ping(peerId: PeerId): Promise<phase0.Ping>;
  metadata(peerId: PeerId): Promise<allForks.Metadata>;
  beaconBlocksByRange(peerId: PeerId, request: phase0.BeaconBlocksByRangeRequest): Promise<allForks.SignedBeaconBlock[]>;
  beaconBlocksByRoot(peerId: PeerId, request: phase0.BeaconBlocksByRootRequest): Promise<allForks.SignedBeaconBlock[]>;
  blobsSidecarsByRange(peerId: PeerId, request: deneb.BlobsSidecarsByRangeRequest): Promise<deneb.BlobsSidecar[]>;
  beaconBlockAndBlobsSidecarByRoot(peerId: PeerId, request: deneb.BeaconBlockAndBlobsSidecarByRootRequest): Promise<deneb.SignedBeaconBlockAndBlobsSidecar[]>;
  lightClientBootstrap(peerId: PeerId, request: Uint8Array): Promise<allForks.LightClientBootstrap>;
  lightClientOptimisticUpdate(peerId: PeerId): Promise<allForks.LightClientOptimisticUpdate>;
  lightClientFinalityUpdate(peerId: PeerId): Promise<allForks.LightClientFinalityUpdate>;
  lightClientUpdatesByRange(peerId: PeerId, request: altair.LightClientUpdatesByRange): Promise<allForks.LightClientUpdate[]>;

  // Gossip publish helpers
  // NOTE: These functions could be stand-alone since each usually has a single caller
  publishBeaconBlock(signedBlock: allForks.SignedBeaconBlock): Promise<PublishResult>;
  publishSignedBeaconBlockAndBlobsSidecar(item: deneb.SignedBeaconBlockAndBlobsSidecar): Promise<PublishResult>;
  publishBeaconAggregateAndProof(aggregateAndProof: phase0.SignedAggregateAndProof): Promise<PublishResult>;
  publishBeaconAttestation(attestation: phase0.Attestation, subnet: number): Promise<PublishResult>;
  publishVoluntaryExit(voluntaryExit: phase0.SignedVoluntaryExit): Promise<PublishResult>;
  publishBlsToExecutionChange(blsToExecutionChange: capella.SignedBLSToExecutionChange): Promise<PublishResult>;
  publishProposerSlashing(proposerSlashing: phase0.ProposerSlashing): Promise<PublishResult>;
  publishAttesterSlashing(attesterSlashing: phase0.AttesterSlashing): Promise<PublishResult>;
  publishSyncCommitteeSignature(signature: altair.SyncCommitteeMessage, subnet: number): Promise<PublishResult>;
  publishContributionAndProof(contributionAndProof: altair.SignedContributionAndProof): Promise<PublishResult>;
  publishLightClientFinalityUpdate(lightClientFinalityUpdate: allForks.LightClientFinalityUpdate): Promise<PublishResult>;
  publishLightClientOptimisticUpdate(lightClientOptimisticUpdate: allForks.LightClientOptimisticUpdate): Promise<PublishResult>;
}
```
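The ReqResp helpers above carry a NOTE about mapping to a single function. A minimal sketch of that idea, assuming a hypothetical generic request method on the core (method names and types are illustrative, not the actual Lodestar API):

```ts
// Hedged sketch: each typed helper on the public API becomes a one-line wrapper
// around a single generic function, so only core.request() crosses the worker boundary.
type ReqRespMethod = "status" | "goodbye" | "ping" | "metadata";

interface NetworkCoreLike {
  request<Res>(peerId: string, method: ReqRespMethod, body: unknown): Promise<Res>;
}

class ReqRespHelpers {
  constructor(private readonly core: NetworkCoreLike) {}

  status(peerId: string, request: unknown): Promise<unknown> {
    return this.core.request(peerId, "status", request);
  }

  ping(peerId: string): Promise<bigint> {
    return this.core.request<bigint>(peerId, "ping", null);
  }
}
```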