Implementer's guide: downward messages and HRMP #1399
Conversation
That also includes a notion of horizontal messages.
15134ca to 69fe295
@@ -68,13 +68,18 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. Transform each [`CommittedCandidateReceipt`](../types/candidate.md#committed-candidate-receipt) into the corresponding [`CandidateReceipt`](../types/candidate.md#candidate-receipt), setting the commitments aside.
1. Check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
1. Check that the upward messages, when combined with the existing queue size, do not exceed the `config.max_upward_queue_count` and `config.watermark_upward_queue_size` parameters.
1. Call `Router::ensure_processed_downward_messages(para, processed_downward_messages)` to check the rules of processing the downward message queue.
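The acceptance-time check in the last step could be sketched as follows. This is an assumption about what `ensure_processed_downward_messages` verifies (a candidate cannot claim more processed messages than the queue holds, and must make progress on a non-empty queue); the standalone signature is a simplification, not the Router module's actual interface.

```rust
// Hypothetical sketch of the acceptance check on `processed_downward_messages`.
// `dmq_length` is the current length of the para's downward message queue.
fn ensure_processed_downward_messages(dmq_length: u32, processed: u32) -> Result<(), &'static str> {
    // A candidate cannot claim to have processed more messages than exist.
    if processed > dmq_length {
        return Err("processed more downward messages than are in the queue");
    }
    // Assumed progress rule: a non-empty queue must be drained by at least
    // one message per candidate so it cannot grow without bound.
    if dmq_length > 0 && processed == 0 {
        return Err("must process at least one downward message");
    }
    Ok(())
}
```

Like the other checks in this list, a failure here makes the whole block invalid rather than being recoverable.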
Other points in this section refer to "each candidate" because the parameter is a set of candidates.

Ah, good point, fixed.

`processed_downwards_messages` is part of the `CandidateCommitments`? It would be clearer to reference that directly, because the name is not defined above.

Ah, I actually considered doing that but thought it was too verbose. I referred to it as e.g. `commitments.processed_downward_messages`. Is that what you meant?

Yeah, that is reasonable.
1. Create an entry in the `PendingAvailability` map for each backed candidate with a blank `availability_votes` bitfield.
1. Create a corresponding entry in the `PendingAvailabilityCommitments` with the commitments.
1. Return a `Vec<CoreIndex>` of all scheduled cores of the list of passed assignments that a candidate was successfully backed for, sorted ascending by `CoreIndex`.
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
  1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
  > TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
  1. Call `Router::queue_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
  1. Call `Router::drain_downward_messages` with the para id of the candidate and `processed_downward_messages` taken from the commitment.
  1. Call `Router::queue_horizontal_messages` with the para id of the candidate and the list of horizontal messages taken from the commitment.
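The `drain_downward_messages` step might look like the following minimal sketch. The signature and queue representation are assumptions for illustration; the key property, discussed in this thread, is that the step is infallible because the count was already validated during `process_candidates`.

```rust
// Illustrative sketch only: prune the first `processed` entries from a para's
// downward message queue at enactment time. Infallible by construction:
// `process_candidates` has already checked that `processed` is valid.
fn drain_downward_messages(queue: &mut Vec<Vec<u8>>, processed: u32) {
    let processed = processed as usize;
    // Post-condition provided by the acceptance checks in process_candidates.
    debug_assert!(processed <= queue.len());
    // `drain` removes the leading, already-processed messages in place.
    queue.drain(..processed.min(queue.len()));
}
```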
Likewise: the name here doesn't match what is in the `Router` module.

`queue_downward_messages` in the guide seems like it can fail if the messages are not of the right type. `enact_candidate` should not be a place where logic can fail, so checks should be done in `process_candidates`.

`send_downward_messages` (né `queue_downward_messages`), as you already noticed, is not used within the enactment process; rather, it is used by the relay chain as part of the implementation of some extrinsics. `drain_downward_messages` and `queue_horizontal_messages` indeed cannot fail, relying on the post-conditions provided by the logic invoked within `process_candidates`.
    TransferInto(AccountId, Balance, Remark),
    /// This downward message is a result of a horizontal message represented as opaque bytes sent
    /// by the specified sender.
    HorizontalMessage(AccountId, Vec<u8>),
Should it be `ParaId` instead of `AccountId`? Maybe even not that.

Ah, yeah, I of course meant `ParaId`. Changed it to that.

Why do you think it might not be `ParaId`? I assumed only other paras can send a horizontal message?

Yeah, I thought we might embed the sender in the message, but it seems better to keep the `ParaId` in there.
as a result of an operation that took place on the relay chain or sent using a horizontal message.

```rust,ignore
enum DownwardMessage {
```
The `Opaque` variant is for governance and stuff to send messages directly to parachains. Also, I think @gavofyork plans to use it to offload relay-chain functionality onto parachains. So then there will be a special protocol between the relay chain and the parachain that will use these `Opaque` messages to communicate.

Ok, got it. Added it below, although I couldn't help myself and chose the name `ParachainSpecific`, since the `Opaque` variant is only opaque for third-party paras who don't participate in this custom protocol: both the relay chain and the recipient para know about the contents. But my reasoning is weak; I will be happy to revert to `Opaque`.

`ParachainSpecific` is clearer to me.
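Putting this thread's conclusions together, the enum might end up looking roughly like this: the horizontal sender is a `ParaId` (not an `AccountId`), and the custom-protocol variant is named `ParachainSpecific` rather than `Opaque`. The type aliases below are simplified stand-ins for the real runtime types, not the actual definitions.

```rust
// Simplified stand-ins for the real runtime types.
type ParaId = u32;
type AccountId = [u8; 32];
type Balance = u128;
type Remark = [u8; 32];

enum DownwardMessage {
    // Some funds were transferred into the para's account as a result of an
    // operation that took place on the relay chain.
    TransferInto(AccountId, Balance, Remark),
    // A horizontal message, represented as opaque bytes, sent by the given para.
    HorizontalMessage(ParaId, Vec<u8>),
    // A message for a custom protocol between the relay chain and this para;
    // opaque only to third-party paras that don't participate in the protocol.
    ParachainSpecific(Vec<u8>),
}
```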
Generally looks good, just some nits and questions.
Forgot to check that in.
@@ -1,8 +1,9 @@
# Router Module

The Router module is responsible for storing and dispatching Upward and Downward messages from and to parachains respectively. It is intended to later handle the XCMP logic as well.
The Router module is responsible for all messaging mechanisms supported between paras and the relay chain, specifically: UMP, DMP, HRMP and later XCMP.
I think it would be better to add a separate page for describing those in detail and linking this page here. As for now, I added those in the glossary.
Yeah, actually the
We can punt on this for now and add it in a follow-up PR. That will be important when writing the node-side code.
For HRMP alone, there is indeed no good reason to use watermarks. We might still want them depending on how downward message passing and XCMP, which definitely needs watermarks, are going to interact in the future.
Ah, sorry, I should have given more context. I started my journey with precisely that assumption (and Rob recommended it as well!). However, after digging into the problem for a little while, I realized that there is not much sense in leveraging the watermarks for downward message queues. First of all, the DMQ seems to be a separate queue/channel even in the presence of XCMP. Therefore, that would require us to have two separate watermarks per para: one for XCMP and one for the DMQ. OTOH, I believe that reconciling these two queues would bring unnecessary complexity. If so, then we notice that for XCMP we have a strong rationale for this particular watermark construction, but for the DMQ, not so much: we don't particularly need that and are free to choose between watermarks or the number of processed messages. So, one might ask then: does it make sense to share code between them? Well, I'd say no. Practically, those are similar concepts but not the same.

OTOH, having a specialized mechanism for draining the queue is IMO better than stretching watermarks, especially considering that the mechanism presented here is really slim. There is a chance that I am missing some part of the big picture, though, and don't see subtle interactions between those two. In that case, I'd really appreciate a practical example.
I think keeping them separate is the way to go.
@@ -26,8 +28,12 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk
- Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain.
- Secondary Checker: A validator who has been randomly selected to perform secondary approval checks on a parablock which is pending approval.
- Subsystem: A long-running task which is responsible for carrying out a particular category of work.
- UMP: (Upward Message Passing) A vertical message passing mechanism from a parachain to the relay chain.
- DMP: (Downward Message Passing) A vertical message passing mechanism from the relay chain to a parachain.
defined twice:

- DMP: (Downward Message Passing) A vertical message passing mechanism from the relay chain to a parachain.
intended for use by the relay chain.

`send_downward_messages` is used for enqueuing one or more downward messages for a certain recipient. Since downward
message queues can hold only so many messages per one sender (and the relay chain is not an exception),
Why should the Relay chain not be an exception? It is the Relay chain which must store these messages, so I don't see why it shouldn't be able to determine when it is acceptable to add some item into the queue. Furthermore, by limiting this, other actions which need to be infallible (such as fund transfer notifications which happen on a scheduled block) may become impossible or result in the need for additional unbounded queues also living on the relay chain. Better, surely, to keep things simple and just allow the relay chain code (which we trust anyway) to add messages into the queue as it sees fit.
Funnily, one of the reasons for this limitation was actually simplicity. Since there is an (at the moment unbounded) set of downward message kinds, it seemed to me simplest to just slap a single limit on a downward message queue (DMQ) instead of coming up with individual mechanisms that prevent DoS for each operation that could lead to a downward message (DM) being sent.

For example, one such operation is sending some funds into a parachain account, which results in a DM. We of course trust the relay chain, but this particular operation can be executed by a signed extrinsic and therefore can be abused by sending dust transfers. Since HRMP piggybacks on the DMQ in this proposal, regular users of the relay chain would be able to compete with the HRMP messages from other paras. Users would have an advantage, since HRMP is limited to 1 message per candidate. Surely, there should be some sort of mitigation in place to prevent that.

We could go with ad-hoc mitigations. E.g. for a transfer into a para there is a simple mitigation: we can coalesce all incoming transfers per block into a single DM. Perhaps we would have to omit the senders for the sake of avoiding inflating the DM's size, but that might limit the usefulness of such a message somewhat. For each future operation that might need to send a DM we would need to come up with a mitigation as well, and it might not be that simple.

The alternative is to just slap a limit on each DMQ: if there is no space to send a message, just bail and call it a day. As you noticed, that would rule out a certain kind of operation, specifically those which cannot check the capacity beforehand. So it is not just simple, it's too simplistic, to the degree that it is also too limiting.

Therefore, it seems that we have to bite the bullet and go with a limitless relay chain and ad-hoc mitigations.
Yeah, basically for non-essential messages then I think aggregation or economic disincentives will need to be put in place.
HRMP should necessarily be expensive; it's mostly there to give a feel for the (XCMP) product before the product is actually ready. It's not really meant for production usage.
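The coalescing mitigation mentioned in this thread could look roughly like the following sketch: all transfers into paras within a block are aggregated so that each recipient para gets at most one transfer DM per block. The function name and shapes are illustrative assumptions, not part of the actual runtime.

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real runtime types.
type ParaId = u32;
type Balance = u128;

// Aggregate all transfers into paras within one block: at most one total
// per recipient para, bounding DMQ growth regardless of how many dust
// transfers a user submits. (Sender identities are dropped, as discussed.)
fn coalesce_transfers(transfers: &[(ParaId, Balance)]) -> HashMap<ParaId, Balance> {
    let mut per_para: HashMap<ParaId, Balance> = HashMap::new();
    for &(para, amount) in transfers {
        *per_para.entry(para).or_insert(0) += amount;
    }
    per_para
}
```

At the end of the block, each entry of the resulting map would be turned into a single downward message for that para.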
FWIW, this first iteration is on hold for now.
This PR is expected to be subsumed by #1503
Closes #1139
This PR presents an initial take on downward messages queue as well as it introduces a mechanism for Horizontally Relay-routed Message Passing (HRMP).
Each para has an associated downward message queue. There are two general sources of downward messages: the relay chain and other paras. The relay chain can send a message to a para in response to some operation (e.g. a transfer to an account of a para). Paras can only send messages to other paras via HRMP. There is no limit on the size of the downward message queue; however, all sending paras can occupy only so many messages as defined by the configuration. If those limits are exceeded by a candidate, it is deemed invalid. There is also a limit on how many messages can be pending that originate from the relay chain. The relay chain is supposed to check the limits before executing the operations that might induce sending a message.
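The per-sender occupancy limit described above might be checked with something like the following; the function and parameter names are assumptions for illustration, not the actual configuration fields.

```rust
// Hypothetical capacity check: a sender may occupy at most
// `max_per_sender` slots in a recipient's downward message queue.
// `saturating_add` avoids overflow on adversarial inputs.
fn can_enqueue(current_from_sender: u32, to_add: u32, max_per_sender: u32) -> bool {
    current_from_sender.saturating_add(to_add) <= max_per_sender
}
```

A candidate whose messages fail this check would be rejected up front, which is what makes the later enactment steps infallible.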
Here are a few remarks about the approach presented here:
`Opaque` variant of downward message, since I am not sure why it is needed.

Open Questions: