This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

guide: collator networking & subsystems #1452

Merged
merged 17 commits into from
Jul 31, 2020
2 changes: 1 addition & 1 deletion roadmap/implementers-guide/src/SUMMARY.md
@@ -29,7 +29,7 @@
- [Bitfield Signing](node/availability/bitfield-signing.md)
- [Collators](node/collators/README.md)
- [Collation Generation](node/collators/collation-generation.md)
- [Collation Distribution](node/collators/collation-distribution.md)
- [Collator Protocol](node/collators/collator-protocol.md)
- [Validity](node/validity/README.md)
- [Utility Subsystems](node/utility/README.md)
- [Availability Store](node/utility/availability-store.md)
@@ -6,7 +6,7 @@ After a candidate is backed, the availability of the PoV block must be confirmed

## Protocol

`ProtocolId`: `b"avad"`
`ProtocolId`: `b"avad"`, `PeerSet`: `Validation`

Input:

@@ -4,7 +4,7 @@ Validators vote on the availability of a backed candidate by issuing signed bitf

## Protocol

`ProtocolId`: `b"bitd"`
`ProtocolId`: `b"bitd"`, `PeerSet`: `Validation`

Input:
[`BitfieldDistributionMessage`](../../types/overseer-protocol.md#bitfield-distribution-message), which is gossiped to all peers, whether they are validators or not.
@@ -4,7 +4,7 @@ This subsystem is responsible for distributing PoV blocks. For now, unified with

## Protocol

`ProtocolId`: `b"povd"`
`ProtocolId`: `b"povd"`, `PeerSet`: `Validation`

Input: [`PoVDistributionMessage`](../../types/overseer-protocol.md#pov-distribution-message)

@@ -18,7 +18,7 @@ Output:

## Functionality

This network protocol is responsible for distributing [`PoV`s](../../types/availability.md#proof-of-validity) by gossip. Since PoVs are heavy in practice, gossip is far from the most efficient way to distribute them. In the future, this should be replaced by a better network protocol that finds validators who have validated the block and connects to them directly. This protocol is descrbied
This network protocol is responsible for distributing [`PoV`s](../../types/availability.md#proof-of-validity) by gossip. Since PoVs are heavy in practice, gossip is far from the most efficient way to distribute them. In the future, this should be replaced by a better network protocol that finds validators who have validated the block and connects to them directly.

This protocol is described in terms of "us" and our peers, with the understanding that this is the procedure that any honest node will run. It has the following goals:
- We never have to buffer an unbounded amount of data
@@ -4,7 +4,7 @@ The Statement Distribution Subsystem is responsible for distributing statements

## Protocol

`ProtocolId`: `b"stmd"`
`ProtocolId`: `b"stmd"`, `PeerSet`: `Validation`

Input:


This file was deleted.

@@ -1,9 +1,36 @@
# Collation Generation

> TODO
The collation generation subsystem is executed on collator nodes and produces candidates to be distributed to validators. If configured to produce collations for a para, it produces collations and then feeds them to the [Collator Protocol][CP] subsystem, which handles the networking.

## Protocol

Input: None

Output: `CollatorProtocolMessage`

## Functionality

## Jobs, if any
The process of generating a collation for a parachain is very parachain-specific. As such, the details of how to do so are beyond the scope of this description. The subsystem should be implemented as an abstract wrapper, which is aware of this configuration:
> **Contributor:** Are validation/pre-validation functions mentioned anywhere else? If not they could go here.
>
> **Author:** Yup, they would go here. However I've deferred that to a later point.

```rust
struct CollationGenerationConfig {
    key: CollatorPair,
    collation_producer: Fn(params) -> async (HeadData, Vec<UpwardMessage>, PoV),
}
```
> **Contributor:** Are you intending that collator nodes extend the polkadot client & recompile? It might be easier to have a local parachain-specific process talk to a polkadot node acting as a collator via some interprocess API, but that's a discussion for much later.
>
> **Author:** Basically, yes. That's how the current Cumulus architecture works, at least. Some IPC-based split would also be amenable to this approach, with the `collation_producer` yielding an IPC future.

The configuration should be optional, to allow for the case where the node is not run with the capability to collate.

On `ActiveLeavesUpdate`:
* If there is no collation generation config, ignore.
* Otherwise, for each `activated` head in the update:
* Determine if the para is scheduled or is next up on any occupied core by fetching the `availability_cores` Runtime API.
* Determine an occupied core assumption to make about the para. The simplest thing to do is to always assume that, if the para occupies a core, the candidate will become available. Further on, this might be determined based on bitfields seen or validator requests.
* Use the Runtime API subsystem to fetch the global validation data and local validation data.
* Construct validation function params based on validation data.
* Invoke the `collation_producer`.
* Construct a `CommittedCandidateReceipt` using the outputs of the `collation_producer` and signing with the `key`.
* Dispatch a [`CollatorProtocolMessage`][CPM]`::DistributeCollation(receipt, pov)`.
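The per-leaf procedure above can be sketched as follows. This is an illustrative sketch only: the types (`Config`, `Hash`, `ParaId`) and the helpers `scheduled_paras` and `produce_collation` are simplified stand-ins for the Runtime API queries and the configured `collation_producer`, not the real node API.

```rust
use std::collections::HashSet;

type ParaId = u32;
type Hash = u64;

struct Config {
    para: ParaId,
}

/// Stand-in for the `availability_cores` Runtime API: the paras scheduled
/// or next up on some core at this relay-parent.
fn scheduled_paras(_relay_parent: Hash) -> HashSet<ParaId> {
    [100, 200].into_iter().collect()
}

/// Stand-in for invoking the configured `collation_producer`.
fn produce_collation(relay_parent: Hash, para: ParaId) -> String {
    format!("collation({relay_parent}, {para})")
}

/// Handle one `ActiveLeavesUpdate`, returning the collations to distribute.
fn on_active_leaves(config: Option<&Config>, activated: &[Hash]) -> Vec<String> {
    // If there is no collation generation config, ignore the update.
    let Some(config) = config else { return Vec::new() };
    activated
        .iter()
        // Only collate where our para is scheduled or next up on a core.
        .filter(|leaf| scheduled_paras(**leaf).contains(&config.para))
        .map(|leaf| produce_collation(*leaf, config.para))
        .collect()
}

fn main() {
    assert!(on_active_leaves(None, &[1]).is_empty());
    let out = on_active_leaves(Some(&Config { para: 100 }), &[1, 2]);
    assert_eq!(out, vec!["collation(1, 100)", "collation(2, 100)"]);
}
```

In the real subsystem, each produced collation would additionally be signed with the configured `key` and dispatched as a `CollatorProtocolMessage::DistributeCollation`.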

[CP]: collator-protocol.md
[CPM]: ../../types/overseer-protocol.md#collatorprotocolmessage
133 changes: 133 additions & 0 deletions roadmap/implementers-guide/src/node/collators/collator-protocol.md
@@ -0,0 +1,133 @@
# Collator Protocol

The Collator Protocol implements the network protocol by which collators and validators communicate. It is used by collators to distribute collations to validators and by validators to accept collations from collators.

Collator-to-Validator networking is more difficult than Validator-to-Validator networking because the set of possible collators for any given para is unbounded, unlike the validator set. Validator-to-Validator networking protocols can easily be implemented as gossip because the data can be bounded, and validators can authenticate each other by their `PeerId`s for the purposes of instantiating and accepting connections.

Since, at least at the level of the para abstraction, the collator-set for any given para is unbounded, validators need to make sure that they are receiving connections from capable and honest collators and that their bandwidth and time are not being wasted by attackers. Communicating across this trust-boundary is the most difficult part of this subsystem.

Validation of candidates is a heavy task, and furthermore, the [`PoV`][PoV] itself is a large piece of data. Empirically, `PoV`s are on the order of 10MB.

> TODO: note the incremental validation function Ximin proposes at https://github.com/paritytech/polkadot/issues/1348

As this network protocol serves as a bridge between collators and validators, it communicates primarily with one subsystem on behalf of each. As a collator, this will receive messages from the [`CollationGeneration`][CG] subsystem. As a validator, this will communicate with the [`CandidateBacking`][CB] subsystem.

## Protocol

Input: [`CollatorProtocolMessage`][CPM]

Output:
- [`RuntimeApiMessage`][RAM]
- [`NetworkBridgeMessage`][NBM]

## Functionality

This network protocol uses the `Collation` peer-set of the [`NetworkBridge`][NB].

```rust
type RequestId = u64;

enum WireMessage {
    /// Declare the intent to advertise collations under a collator ID.
    Declare(CollatorId),
    /// Advertise a collation to a validator. Can only be sent once the peer has declared
    /// that they are a collator with given ID.
    AdvertiseCollation(Hash, ParaId),
    /// Request the advertised collation at that relay-parent.
    RequestCollation(RequestId, Hash, ParaId),
    /// A requested collation.
    Collation(RequestId, CandidateReceipt, PoV),
```

> **tomaka (Jul 31, 2020), on lines +36 to +39:** This should instead be a request-response-style protocol, but I suppose this can be changed later? As long as collators are honest, it's ok to send collations through notifications.
>
> **Author:** Yeah, it can be changed later. As I understand, request/response protocols are not implemented yet.

```rust
}
```

Since this protocol functions both for validators and collators, it is easiest to go through the protocol actions for each of them separately.

Connectivity between validators and collators:
```dot process
digraph {
    c1 [shape=MSquare, label="Collator 1"];
    c2 [shape=MSquare, label="Collator 2"];

    v1 [shape=MSquare, label="Validator 1"];
    v2 [shape=MSquare, label="Validator 2"];

    c1 -> v1;
    c1 -> v2;
    c2 -> v2;
}
```

### Collators

It is assumed that collators are only collating on a single parachain. Collations are generated by the [Collation Generation][CG] subsystem. We will keep up to one local collation per relay-parent, based on `DistributeCollation` messages. If the para is not scheduled or next up on any core at the relay-parent, or the relay-parent isn't in the active-leaves set, we ignore the message, as it must be invalid in that case - although this indicates a logic error elsewhere in the node.

We keep track of the Para ID we are collating on as a collator. This starts as `None`, and is updated with each `CollateOn` message received. If the `ParaId` of a collation requested to be distributed does not match the one we expect, we ignore the message.
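The para-tracking rule above can be sketched with illustrative stand-in types (this models only the bookkeeping, not the real subsystem state):

```rust
type ParaId = u32;

/// Collator-side state: which para, if any, we have been told to collate on.
#[derive(Default)]
struct State {
    collating_on: Option<ParaId>,
}

impl State {
    /// Handle a `CollateOn` message; later signals overwrite earlier ones.
    fn collate_on(&mut self, para: ParaId) {
        self.collating_on = Some(para);
    }

    /// Whether a `DistributeCollation` request for `para` should be acted on.
    fn accept_distribution(&self, para: ParaId) -> bool {
        self.collating_on == Some(para)
    }
}

fn main() {
    let mut state = State::default();
    assert!(!state.accept_distribution(7)); // no `CollateOn` received yet
    state.collate_on(7);
    assert!(state.accept_distribution(7));
    assert!(!state.accept_distribution(8)); // mismatched para: ignore
}
```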

As with most other subsystems, we track the active leaves set by following `ActiveLeavesUpdate` signals.

For the purposes of actually distributing a collation, we need to be connected to the validators who are interested in collations on that `ParaId` at this point in time. We assume that there is a discovery API for connecting to a set of validators.

> TODO: design & expose the discovery API not just for connecting to such peers but also to determine which of our current peers are validators.

As seen in the [Scheduler Module][SCH] of the runtime, validator groups are fixed for an entire session and their rotations across cores are predictable. Collators will want to do these things when attempting to distribute collations at a given relay-parent:
* Determine which core the para collated-on is assigned to.
* Determine the group on that core and the next group on that core.
* Issue a discovery request for the validators of the current group and the next group with [`NetworkBridgeMessage`][NBM]`::ConnectToValidators`.
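Since rotations are predictable, the group selection in the steps above can be sketched as below. The rotation rule here is a simplified assumption for illustration, not the actual Scheduler module logic:

```rust
/// Index of the group assigned to `core` at block `now`, assuming `group_count`
/// groups that rotate across cores once every `rotation_frequency` blocks.
fn group_for_core(core: usize, group_count: usize, now: u64, rotation_frequency: u64) -> usize {
    let rotations = (now / rotation_frequency) as usize;
    (core + rotations) % group_count
}

/// The groups a collator should connect to at `now`: the group currently on
/// the core, and the group that takes over after the next rotation.
fn groups_to_connect(core: usize, group_count: usize, now: u64, rotation_frequency: u64) -> (usize, usize) {
    let current = group_for_core(core, group_count, now, rotation_frequency);
    let next = group_for_core(core, group_count, now + rotation_frequency, rotation_frequency);
    (current, next)
}

fn main() {
    // 5 groups rotating every 10 blocks; at block 25 there have been 2 rotations,
    // so core 2 is held by group 4, and group 0 takes over next.
    assert_eq!(groups_to_connect(2, 5, 25, 10), (4, 0));
}
```

The validator IDs of both groups would then be passed to `NetworkBridgeMessage::ConnectToValidators`.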

Once connected to the relevant peers for the current group assigned to the core (transitively, the para), advertise the collation to any of them which advertise the relay-parent in their view (as provided by the [Network Bridge][NB]). If any respond with a request for the full collation, provide it. Upon receiving a view update from any of these peers which includes a relay-parent for which we have a collation that they will find relevant, advertise the collation to them if we haven't already.

### Validators

On the validator side of the protocol, validators need to accept incoming connections from collators. They should keep some peer slots open for accepting new speculative connections from collators and should disconnect from collators who are not relevant.

```dot process
digraph G {
    label = "Declaring, advertising, and providing collations";
    labelloc = "t";
    rankdir = LR;

    subgraph cluster_collator {
        rank = min;
        label = "Collator";
        graph[style = border, rank = min];

        c1, c2 [label = ""];
    }

    subgraph cluster_validator {
        rank = same;
        label = "Validator";
        graph[style = border];

        v1, v2 [label = ""];
    }

    c1 -> v1 [label = "Declare and advertise"];

    v1 -> c2 [label = "Request"];

    c2 -> v2 [label = "Provide"];

    v2 -> v2 [label = "Note Good/Bad"];
}
```

When peers connect to us, they can `Declare` that they represent a collator with a given public key. Once they've declared that, they can begin to send advertisements of collations. Peers should not send us any advertisements for collations that are on a relay-parent outside of our view.

The protocol tracks advertisements received and the source of the advertisement. The advertisement source is the `PeerId` of the peer who sent the message. We accept one advertisement per collator per source per relay-parent.
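The one-advertisement rule above amounts to keeping a set keyed by (source, collator, relay-parent); a minimal sketch with stand-in types:

```rust
use std::collections::HashSet;

type PeerId = &'static str;
type CollatorId = &'static str;
type Hash = u64;

#[derive(Default)]
struct Advertisements(HashSet<(PeerId, CollatorId, Hash)>);

impl Advertisements {
    /// Accept an advertisement; returns false if we already have one from
    /// this source for this collator and relay-parent.
    fn note(&mut self, source: PeerId, collator: CollatorId, relay_parent: Hash) -> bool {
        self.0.insert((source, collator, relay_parent))
    }
}

fn main() {
    let mut ads = Advertisements::default();
    assert!(ads.note("peer-a", "collator-1", 42));
    assert!(!ads.note("peer-a", "collator-1", 42)); // duplicate rejected
    assert!(ads.note("peer-b", "collator-1", 42)); // different source is fine
}
```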

As a validator, we will handle requests from other subsystems to fetch a collation on a specific `ParaId` and relay-parent. These requests are made with the [`CollatorProtocolMessage`][CPM]`::FetchCollation`. To do so, we need to first check if we have already gathered a collation on that `ParaId` and relay-parent. If not, we need to select one of the advertisements and issue a request for it. If we've already issued a request, we shouldn't issue another one until the first has returned.

When acting on an advertisement, we issue a `WireMessage::RequestCollation`. If the request times out, we note the collator as being unreliable, reduce its priority relative to other collators, and make another request - repeating until we get a response or the chain has moved on.
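The retry behaviour can be sketched as a priority over collators that drops when a request times out. The reputation scores here are an illustrative stand-in for whatever cost/benefit scheme the node actually uses:

```rust
use std::collections::HashMap;

type CollatorId = &'static str;

/// Pick the advertisement from the collator with the best reputation.
fn best_advertisement<'a>(
    ads: &'a [CollatorId],
    reputation: &HashMap<CollatorId, i32>,
) -> Option<&'a CollatorId> {
    ads.iter().max_by_key(|c| reputation.get(**c).copied().unwrap_or(0))
}

fn main() {
    let ads = ["alice", "bob"];
    let mut reputation: HashMap<CollatorId, i32> = HashMap::new();

    // Suppose our request to "alice" timed out: note her as unreliable.
    reputation.insert("alice", -10);

    // On retry, "bob" is now preferred.
    assert_eq!(best_advertisement(&ads, &reputation), Some(&"bob"));
}
```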

As a validator, once the collation has been fetched some other subsystem will inspect and do deeper validation of the collation. The subsystem will report back with a [`CollatorProtocolMessage`][CPM]`::ReportCollator` or `NoteGoodCollation` message. On a negative report, if we are connected directly to the collator, we apply a cost to the `PeerId` associated with the collator and potentially disconnect or blacklist it.

> **Contributor:** As mentioned in my other comment on PeerSet, validators should also be connected to a few other validators on the same parachain, and forward received collations onto them. (The details of neighbour selection are mentioned in #1348.) Other validators are more trusted than collators, so the protocol wire message here only needs to consist of the actual Collation, but if it's simpler to re-use the collator-validator wire message for now, it wouldn't hurt. However, in the future, once we get onto passing around whitelists/blacklists of collators, the message types would have to diverge.
>
> **Author:** Agreed, but this is handled by the PoV distribution and Statement distribution subsystems, not collation distribution.

[PoV]: ../../types/availability.md#proofofvalidity
[CPM]: ../../types/overseer-protocol.md#collatorprotocolmessage
[CG]: collation-generation.md
[CB]: ../backing/candidate-backing.md
[NB]: ../utility/network-bridge.md
[CBM]: ../../types/overseer-protocol.md#candidatebackingmesage
[RAM]: ../../types/overseer-protocol.md#runtimeapimessage
[NBM]: ../../types/overseer-protocol.md#networkbridgemessage
[SCH]: ../../runtime/scheduler.md
18 changes: 16 additions & 2 deletions roadmap/implementers-guide/src/node/utility/network-bridge.md
@@ -8,19 +8,24 @@ One other piece of shared state to track is peer reputation. When peers are foun

So in short, this Subsystem acts as a bridge between an actual network component and a subsystem's protocol.

Another component of the network bridge is the choice of which peer-set to use: different peer-sets can be connected for different purposes. The network bridge is not generic over peer-set; instead it exposes two peer-sets that event producers can attach to: `Validation` and `Collation`. More information can be found in the documentation of the [`NetworkBridgeMessage`][NBM].

## Protocol

Input: [`NetworkBridgeMessage`](../../types/overseer-protocol.md#network-bridge-message)
Input: [`NetworkBridgeMessage`][NBM]
Output: Varying, based on registered event producers.

## Functionality

Track a set of all Event Producers, each associated with a 4-byte protocol ID.
Track a set of all Event Producers, each associated with a 4-byte protocol ID and the `PeerSet` it operates on.

There are two types of network messages this sends and receives:

- ProtocolMessage(ProtocolId, Bytes)
- ViewUpdate(View)

Each of these network messages is associated with a particular peer-set. If we are connected to the same peer on both peer-sets, we will receive two `ViewUpdate`s from them every time they change their view.
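A minimal sketch of these two message types, tagged by peer-set (the types here are illustrative stand-ins; the real `View` and wire types live in the node code):

```rust
type Hash = u64;
type ProtocolId = [u8; 4];

#[derive(Debug, Clone, PartialEq)]
struct View(Vec<Hash>);

#[derive(Debug, PartialEq)]
enum PeerSet {
    Validation,
    Collation,
}

#[derive(Debug, PartialEq)]
enum NetworkMessage {
    ProtocolMessage(ProtocolId, Vec<u8>),
    ViewUpdate(View),
}

fn main() {
    // A peer connected on both peer-sets sends its view change twice,
    // once per peer-set.
    let view = View(vec![1, 2]);
    let msgs = [
        (PeerSet::Validation, NetworkMessage::ViewUpdate(view.clone())),
        (PeerSet::Collation, NetworkMessage::ViewUpdate(view)),
    ];
    assert_eq!(msgs[0].1, msgs[1].1);
}
```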

`ActiveLeavesUpdate`'s `activated` and `deactivated` lists determine the evolution of our local view over time. A `ViewUpdate` is issued to each connected peer after each update, and a `NetworkBridgeUpdate::OurViewChange` is issued for each registered event producer.

On `RegisterEventProducer`:
@@ -44,3 +49,12 @@ On `ReportPeer` message:
On `SendMessage` message:

- Issue a corresponding `ProtocolMessage` to each listed peer with given protocol ID and bytes.

[NBM]: ../../types/overseer-protocol.md#network-bridge-message

On `ConnectToValidators` message:

- Determine the DHT keys to use for each validator based on the relay-chain state and Runtime API.
- Recover the Peer IDs of the validators from the DHT. There may be more than one peer ID per validator.
- Accumulate all `(ValidatorId, PeerId)` pairs and send on the response channel.
- Feed all Peer IDs to the discovery utility the underlying network provides.
> **Contributor:** What is meant with discovery utility here? Would the line below work as well?
>
> Suggested change: "Feed all Peer IDs to the discovery utility the underlying network provides." → "Add one `PeerId` per validator as a priority group to the `PeerSet`."
>
> **Author:** I guess so, but I talked with Pierre and we wanted to do away with this priority groups thing. The previous line seems more general and can be adapted based on what is actually done.
>
> **Author:** The guide in general leans away from implementation details of Substrate. I believe priority groups are such a detail.
>
> **Contributor:** Suggested change: "Feed all Peer IDs to the discovery utility the underlying network provides." → "Feed all Peer IDs to the peer set manager the underlying network provides." In case we want to keep it generic I would suggest this instead. I don't think discovery utility is the right term here, as the peer has already been discovered at this point.
>
> **Author:** Addressed in #1535.
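The `ConnectToValidators` steps listed above can be sketched as follows; the DHT lookup is a stand-in returning hard-coded peers, since the real recovery depends on relay-chain state and the Runtime API:

```rust
type ValidatorId = &'static str;
type PeerId = &'static str;

/// Stand-in for recovering peer IDs from the DHT; a validator may have
/// published more than one.
fn dht_lookup(validator: ValidatorId) -> Vec<PeerId> {
    match validator {
        "val-1" => vec!["peer-a", "peer-b"],
        "val-2" => vec!["peer-c"],
        _ => vec![],
    }
}

/// Accumulate `(ValidatorId, PeerId)` pairs for the response channel and the
/// flat list of peer IDs to feed to the network's discovery machinery.
fn connect_to_validators(
    validators: &[ValidatorId],
) -> (Vec<(ValidatorId, PeerId)>, Vec<PeerId>) {
    let mut pairs = Vec::new();
    for v in validators {
        for p in dht_lookup(*v) {
            pairs.push((*v, p));
        }
    }
    let peers = pairs.iter().map(|(_, p)| *p).collect();
    (pairs, peers)
}

fn main() {
    let (pairs, peers) = connect_to_validators(&["val-1", "val-2"]);
    assert_eq!(pairs.len(), 3); // one pair per discovered peer ID
    assert_eq!(peers, vec!["peer-a", "peer-b", "peer-c"]);
}
```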

2 changes: 1 addition & 1 deletion roadmap/implementers-guide/src/parachains-overview.md
@@ -18,7 +18,7 @@ Here is a description of the Inclusion Pipeline: the path a parachain block (or

1. Validators are selected and assigned to parachains by the Validator Assignment routine.
1. A collator produces the parachain block, which is known as a parachain candidate or candidate, along with a PoV for the candidate.
1. The collator forwards the candidate and PoV to validators assigned to the same parachain via the [Collation Distribution subsystem](node/collators/collation-distribution.md).
1. The collator forwards the candidate and PoV to validators assigned to the same parachain via the [Collator Protocol](node/collators/collator-protocol.md).
1. The validators assigned to a parachain at a given point in time participate in the [Candidate Backing subsystem](node/backing/candidate-backing.md) to validate candidates that were put forward for validation. Candidates which gather enough signed validity statements from validators are considered "backable". Their backing is the set of signed validity statements.
1. A relay-chain block author, selected by BABE, can note up to one (1) backable candidate for each parachain to include in the relay-chain block alongside its backing. A backable candidate once included in the relay-chain is considered backed in that fork of the relay-chain.
1. Once backed in the relay-chain, the parachain candidate is considered to be "pending availability". It is not considered to be included as part of the parachain until it is proven available.
46 changes: 43 additions & 3 deletions roadmap/implementers-guide/src/types/overseer-protocol.md
@@ -126,20 +126,60 @@ enum CandidateSelectionMessage {
}
```

## Collator Protocol Message

Messages received by the [Collator Protocol subsystem](../node/collators/collator-protocol.md)

```rust
enum CollatorProtocolMessage {
```

> **Contributor:** "Protocol" suggests that this message is passed to other nodes, and is somewhat stable, but as I understand this is just an internal message between subsystems and could change whenever. How about `CollatorInfoMessage` or just `CollatorInfo`?
>
> **Author:** The subsystem is called the Collator Protocol subsystem. Our naming convention for these message types is `format!("{}Message", subsystem_name)`.

```rust
    /// Signal to the collator protocol that it should connect to validators with the expectation
    /// of collating on the given para. This is only expected to be called once, early on, if at all,
    /// and only by the Collation Generation subsystem. As such, it will overwrite the value of
    /// the previous signal.
    ///
    /// This should be sent before any `DistributeCollation` message.
    CollateOn(ParaId),
    /// Provide a collation to distribute to validators.
    DistributeCollation(CandidateReceipt, PoV),
    /// Fetch a collation under the given relay-parent for the given ParaId.
    FetchCollation(Hash, ParaId, ResponseChannel<(CandidateReceipt, PoV)>),
    /// Report a collator as having provided an invalid collation. This should lead to disconnect
    /// and blacklist of the collator.
    ReportCollator(CollatorId),
    /// Note a collator as having provided a good collation.
    NoteGoodCollation(CollatorId),
}
```

## Network Bridge Message

Messages received by the network bridge. This subsystem is invoked by others to manipulate access
to the low-level networking code.

```rust
/// Peer-sets handled by the network bridge.
enum PeerSet {
    /// The collation peer-set is used to distribute collations from collators to validators.
    Collation,
```

> **Contributor:** Perhaps I am interpreting "PeerSet" incorrectly, but with collator networking for validators there are actually two types of neighbours. Then there is "Passing to the relay chain", which is done on the main relay chain gossip protocol that already exists, with its own set of neighbours that (as you note below) can include non-validators, and also validators on another parachain.
>
> **Author:** For what's specified, we only need collator<>validator communication. The validator<>validator aspects of distributing whitelists etc. are not handled in this version. Other aspects of parachain networking (distributing PoVs among the parachain group, gossiping statements) are beyond the scope of this subsystem and do indeed use the Validation peer-set.
>
> **Contributor:** In this context, PoV blocks are not supposed to be distributed to the Validation peer-set; that is the purpose of the A&V protocol. Here they are only supposed to be distributed to other parachain validators, so that they can sign attestations for them, which is less traffic overall. If we do not have this component, then parachain collators will need to send the same PoV block to multiple parachain validators in order to achieve the minimum number of attestations needed for the block production protocol. By having parachain validators also pass this between each other, we alleviate this bottleneck. I think this is fairly important to have even in an early version of the protocol. Also, you are using both the terms "collation" and "PoV block" separately. Are you saying they are different things? Because I understood them to be the same thing.
>
> **rphmeier (Jul 31, 2020):** Yes, I understood this, and it is covered by another part of the code, as I mentioned. The flow is Collator -> Validator. The validator chooses a collation to second (Candidate Selection), signs an attestation (Candidate Backing), and circulates the PoV to other members of the same group (PoV Distribution). Other members of the group validate and sign attestations (Candidate Backing). Signing an attestation also implies keeping the data available. If the candidate is backed, then Availability Distribution is used to distribute the erasure-coded pieces. As for terminology: a collation is `(CandidateReceipt, PoV)`.
>
> **Contributor:** OK, what you said makes sense, although I am still confused by the docstring on "Validation": "This may include nodes which are not validators, as some protocols on this peer-set are expected to be gossip." This makes sense for the main gossip network on the relay chain, for GRANDPA/BABE etc. However it does not make sense for PoV distribution - here you are only distributing to other parachain validators, not non-validators nor validators on other parachains.
>
> **Author:** @infinity0 We can't guarantee at this point that we have transitive connection among the validator set. I guarantee that, in practice, if we excluded full nodes we would probably not achieve parachain liveness.
>
> **Contributor:** Are there plans to guarantee these connections? PoV blocks can get quite large, so gossiping these via non-validators will add a lot of latency - and also non-validator full nodes are untrusted, so this allows them to spam validators that are hoping to receive these objects.
>
> **Author:** In this case, the data are authenticated, because they must be presented alongside a Seconded candidate by a validator. Barring validator equivocations, the amount of data is bounded.

```rust
    /// The validation peer-set is used to distribute information relevant to parachain
    /// validation among validators. This may include nodes which are not validators,
    /// as some protocols on this peer-set are expected to be gossip.
    Validation,
}

enum NetworkBridgeMessage {
    /// Register an event producer with the network bridge. This should be done early and cannot
    /// be de-registered.
    RegisterEventProducer(ProtocolId, Fn(NetworkBridgeEvent) -> AllMessages),
    RegisterEventProducer(PeerSet, ProtocolId, Fn(NetworkBridgeEvent) -> AllMessages),
    /// Report a cost or benefit of a peer. Negative values are costs, positive are benefits.
    ReportPeer(PeerId, cost_benefit: i32),
    ReportPeer(PeerSet, PeerId, cost_benefit: i32),
    /// Send a message to one or more peers on the given protocol ID.
    SendMessage([PeerId], ProtocolId, Bytes),
    SendMessage(PeerSet, [PeerId], ProtocolId, Bytes),
    /// Connect to peers who represent the given `ValidatorId`s at the given relay-parent.
    ///
    /// Also accepts a response channel by which the issuer can learn the `PeerId`s of those
    /// validators.
    ConnectToValidators(PeerSet, [ValidatorId], ResponseChannel<[(ValidatorId, PeerId)]>),
}
```
