Add versioning to request response protocols #2362

Merged
acerone85 merged 28 commits into master from 1311-return-better-error-messages-from-p2p-requests on Oct 22, 2024

Conversation

acerone85
Contributor

@acerone85 acerone85 commented Oct 15, 2024

Linked Issues/PRs

Related issue: #1311

Context

The P2P service uses the request_response protocol to let peers request information (e.g. block headers, or transactions in the tx pool) from other peers.
Currently, the P2P service provides a protocol /fuel/req_res/0.0.1, which defines responses to carry an Option value. When a request from a peer is unsuccessful, we send a None value as the response, wrapped in the relevant response variant.

In practice, we want to be able to send more meaningful information in responses when requests are unsuccessful, in the form of error codes. This requires implementing a new version of the protocol, /fuel/req_res/0.0.2, where Response messages are changed to contain error codes.
Because we want to remain backward compatible with peers that only use version 0.0.1 of the protocol, we must account for requests sent by, and responses sent to, those peers.
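
A minimal sketch of the shape change described here; the variant lists and omitted derive attributes are illustrative, not the exact definitions from the PR:

    // /fuel/req_res/0.0.1: an unsuccessful request is encoded as `None`.
    pub enum LegacyResponseMessage {
        SealedHeaders(Option<Vec<SealedBlockHeader>>),
        // ... other variants (Transactions, TxPoolAllTransactionsIds, ...)
    }

    // /fuel/req_res/0.0.2: an unsuccessful request carries an explicit error code.
    pub enum ResponseMessage {
        SealedHeaders(Result<Vec<SealedBlockHeader>, ResponseMessageErrorCode>),
        // ... other variants (Transactions, TxPoolAllTransactionsIds, ...)
    }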

Description

  • Rename the ResponseMessage type in the P2P service to LegacyResponseMessage
  • Add a ResponseMessageErrorCode enum, which for now only contains a LegacyNodeEmptyResponse variant.
  • Implement a new ResponseMessage type whose variants wrap a Result<_, ResponseMessageErrorCode>
  • Change the P2P service to use the new ResponseMessage type. Responses previously returned as None are now returned as Err(LegacyNodeEmptyResponse).
  • Change the PostcardCodec to serialize and deserialize according to two different versions of the /fuel/req_res protocol: /fuel/req_res/0.0.1 and /fuel/req_res/0.0.2.
  • Serialization and deserialization of responses using /fuel/req_res/0.0.1 are performed by first converting a ResponseMessage into a LegacyResponseMessage and vice versa, thus preserving backward compatibility (see the sketch after this list).
  • Change the req_res behaviour used by the P2P service to use both versions of the req_res protocol.
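
A hedged sketch of the conversion and version-aware serialization described in the last two bullets. The From impl, the encode_response helper, and the use of postcard::to_allocvec are illustrative assumptions; the PR's actual code lives in the PostcardCodec's write_response/read_response implementations:

    // Errors collapse to `None` when converting to the legacy message,
    // matching the v0.0.1 wire format.
    impl From<ResponseMessage> for LegacyResponseMessage {
        fn from(response: ResponseMessage) -> Self {
            match response {
                ResponseMessage::SealedHeaders(headers) => {
                    LegacyResponseMessage::SealedHeaders(headers.ok())
                }
                // ... other variants follow the same pattern
            }
        }
    }

    // Serialize according to the negotiated protocol version: convert to the
    // legacy message first when talking to a /fuel/req_res/0.0.1 peer.
    fn encode_response(
        protocol: &PostcardProtocol,
        response: ResponseMessage,
    ) -> Result<Vec<u8>, postcard::Error> {
        match protocol {
            PostcardProtocol::V1 => {
                let legacy: LegacyResponseMessage = response.into();
                postcard::to_allocvec(&legacy)
            }
            PostcardProtocol::V2 => postcard::to_allocvec(&response),
        }
    }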

TODO (here or in different PRs)

  • Investigate how protocols are exchanged during peer handshakes. We don't want two nodes that both understand /fuel/req_res/0.0.2 to communicate using /fuel/req_res/0.0.1 (see the sketch after this list).
  • Add more meaningful error codes.
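
As a starting point for the first TODO item, a hedged sketch of how the request_response behaviour might be constructed so that both protocol versions are advertised, with V2 listed first. The with_codec constructor and ProtocolSupport come from libp2p's request_response module; MAX_RESPONSE_SIZE is a placeholder, and whether listing order alone is enough to guarantee that two upgraded nodes negotiate V2 is exactly what needs to be verified:

    use libp2p::request_response::{self, ProtocolSupport};

    // Sketch: advertise both versions, proposing V2 ahead of V1 (assumption to verify).
    let behaviour = request_response::Behaviour::with_codec(
        PostcardCodec::new(MAX_RESPONSE_SIZE),
        [
            (PostcardProtocol::V2, ProtocolSupport::Full),
            (PostcardProtocol::V1, ProtocolSupport::Full),
        ],
        request_response::Config::default(),
    );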

Checklist

  • Breaking changes are clearly marked as such in the PR description and changelog
  • New behavior is reflected in tests
  • The specification matches the implemented behavior (link update PR if changes are needed)

Before requesting review

  • I have reviewed the code myself
  • I have created follow-up issues caused by this PR and linked them here

After merging, notify other teams

[Add or remove entries as needed]

@acerone85 acerone85 linked an issue Oct 15, 2024 that may be closed by this pull request
@acerone85 acerone85 changed the base branch from release/v0.40.0 to master October 15, 2024 22:28
@acerone85 acerone85 changed the title WIP - Add versioning to request response protocols Add versioning to request response protocols Oct 16, 2024
This reverts commit f3b7db3b0f4aa92f2606c87ed6f65f4798734889.
@acerone85 acerone85 self-assigned this Oct 18, 2024
AurelienFT
AurelienFT previously approved these changes Oct 20, 2024
@@ -32,14 +34,77 @@ pub enum RequestMessage {
TxPoolFullTransactions(Vec<TxId>),
}

// TODO: Do we want explicit status codes or an Error type?
Contributor

I think it's not a big problem to have a more detailed error (with a specific block id or something like that) as long as we add a limit on the size of the error.

Member

@MitchTurner MitchTurner left a comment

WIP. I'm gonna give this another pass soon.

#[derive(Debug, Clone, Serialize, Deserialize)]
-pub enum ResponseMessage {
+pub enum LegacyResponseMessage {
Member

I'm trying to decide if it would make more sense to name this V1ResponseMessage? The term Legacy will become ambiguous once we add a third protocol.

Same question applies to ResponseMessage.

Or you could name it something descriptive like the protocol consts, e.g. REQUEST_RESPONSE_WITH_ERROR_CODES_PROTOCOL_ID

@@ -160,30 +176,200 @@ impl GossipsubCodec for PostcardCodec {
}
}

// TODO: Remove this NetworkCodec
Member

Have you created an issue? And I think the TODO should be on the trait itself, rather than one of the impls.

Contributor Author

Yes, issue moved to the trait definition and referenced in b710d8d

Comment on lines 225 to 249
#[tokio::test]
async fn serialzation_roundtrip_using_v2() {
    let sealed_block_headers = vec![SealedBlockHeader::default()];
    let response = ResponseMessage::SealedHeaders(Ok(sealed_block_headers.clone()));
    let mut codec = PostcardCodec::new(1024);
    let mut buf = Vec::with_capacity(1024);
    codec
        .write_response(&PostcardProtocol::V2, &mut buf, response)
        .await
        .expect("Valid Vec<SealedBlockHeader> should be serialized using v1");

    let deserialized = codec
        .read_response(&PostcardProtocol::V2, &mut buf.as_slice())
        .await
        .expect("Valid Vec<SealedBlockHeader> should be deserialized using v1");

    match deserialized {
        ResponseMessage::SealedHeaders(Ok(sealed_block_headers_response)) => {
            assert_eq!(sealed_block_headers, sealed_block_headers_response);
        }
        other => {
            panic!("Deserialized to {other:?}, expected {sealed_block_headers:?}")
        }
    }
}
Member

Nit: I prefer all UTs include given/when/then blocks and start the name with the SUT.

Suggested change
#[tokio::test]
async fn serialzation_roundtrip_using_v2() {
    let sealed_block_headers = vec![SealedBlockHeader::default()];
    let response = ResponseMessage::SealedHeaders(Ok(sealed_block_headers.clone()));
    let mut codec = PostcardCodec::new(1024);
    let mut buf = Vec::with_capacity(1024);
    codec
        .write_response(&PostcardProtocol::V2, &mut buf, response)
        .await
        .expect("Valid Vec<SealedBlockHeader> should be serialized using v1");
    let deserialized = codec
        .read_response(&PostcardProtocol::V2, &mut buf.as_slice())
        .await
        .expect("Valid Vec<SealedBlockHeader> should be deserialized using v1");
    match deserialized {
        ResponseMessage::SealedHeaders(Ok(sealed_block_headers_response)) => {
            assert_eq!(sealed_block_headers, sealed_block_headers_response);
        }
        other => {
            panic!("Deserialized to {other:?}, expected {sealed_block_headers:?}")
        }
    }
}

#[tokio::test]
async fn codec__v1_serialzation_roundtrip() {
    /// given
    let sealed_block_headers = vec![SealedBlockHeader::default()];
    let response = ResponseMessage::SealedHeaders(Ok(sealed_block_headers.clone()));
    let mut codec = PostcardCodec::new(1024);
    let mut buf = Vec::with_capacity(1024);
    /// when
    codec
        .write_response(&PostcardProtocol::V2, &mut buf, response)
        .await
        .expect("Valid Vec<SealedBlockHeader> should be serialized using v1");
    let deserialized = codec
        .read_response(&PostcardProtocol::V2, &mut buf.as_slice())
        .await
        .expect("Valid Vec<SealedBlockHeader> should be deserialized using v1");
    /// then
    match deserialized {
        ResponseMessage::SealedHeaders(Ok(sealed_block_headers_response)) => {
            assert_eq!(sealed_block_headers, sealed_block_headers_response);
        }
        other => {
            panic!("Deserialized to {other:?}, expected {sealed_block_headers:?}")
        }
    }
}

This is a round-trip test, which I consider an exception to the rule that each test should be testing one thing (it's just so succinct)

Contributor Author

Given/When/Then sections added in cec899c

// We cannot access the codec trait from an old node here,
// so we deserialize directly using the `LegacyResponseMessage` type.
deserialize::<LegacyResponseMessage>(&buf).expect("Deserialization as LegacyResponseMessage should succeed");
match deserialized_as_legacy {
Member

I'd prefer a matches! macro here (and probably on all the tests) because when I'm reading a test I'm always looking for an explicit expected condition, not an implicit one like here.
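
For illustration, a hedged sketch of the matches!-based assertion this comment asks for, using the names from the snippet above:

    // Assert the expected variant explicitly instead of an open-ended match.
    assert!(
        matches!(
            deserialized_as_legacy,
            LegacyResponseMessage::SealedHeaders(None)
        ),
        "Deserialized to {deserialized_as_legacy:?}, expected SealedHeaders(None)"
    );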

Contributor Author

nice :) I have refactored the tests to use matches! in 4280781

Comment on lines 331 to 353
#[tokio::test]
async fn backward_compatibility_v1_read_error_response() {
    let response = ResponseMessage::SealedHeaders(Err(
        ResponseMessageErrorCode::ProtocolV1EmptyResponse,
    ));
    let mut codec = PostcardCodec::new(1024);
    let mut buf = Vec::with_capacity(1024);
    codec
        .write_response(&PostcardProtocol::V1, &mut buf, response.clone())
        .await
        .expect("Valid Vec<SealedBlockHeader> is serialized using v1");

    let deserialized_as_legacy =
        // We cannot access the codec trait from an old node here,
        // so we deserialize directly using the `LegacyResponseMessage` type.
        deserialize::<LegacyResponseMessage>(&buf).expect("Deserialization as LegacyResponseMessage should succeed");
    match deserialized_as_legacy {
        LegacyResponseMessage::SealedHeaders(None) => {}
        other => {
            panic!("Deserialized to {other:?}, expected {response:?}")
        }
    }
}
Member

Another example of how to structure the tests:

Suggested change
#[tokio::test]
async fn backward_compatibility_v1_read_error_response() {
    let response = ResponseMessage::SealedHeaders(Err(
        ResponseMessageErrorCode::ProtocolV1EmptyResponse,
    ));
    let mut codec = PostcardCodec::new(1024);
    let mut buf = Vec::with_capacity(1024);
    codec
        .write_response(&PostcardProtocol::V1, &mut buf, response.clone())
        .await
        .expect("Valid Vec<SealedBlockHeader> is serialized using v1");
    let deserialized_as_legacy =
        // We cannot access the codec trait from an old node here,
        // so we deserialize directly using the `LegacyResponseMessage` type.
        deserialize::<LegacyResponseMessage>(&buf).expect("Deserialization as LegacyResponseMessage should succeed");
    match deserialized_as_legacy {
        LegacyResponseMessage::SealedHeaders(None) => {}
        other => {
            panic!("Deserialized to {other:?}, expected {response:?}")
        }
    }
}

#[tokio::test]
async fn codec__write_response_is_backwards_compatible_with_v1() {
    // given
    let response = ResponseMessage::SealedHeaders(Err(
        ResponseMessageErrorCode::ProtocolV1EmptyResponse,
    ));
    let mut codec = PostcardCodec::new(1024);
    let mut buf = Vec::with_capacity(1024);
    // when
    codec
        .write_response(&PostcardProtocol::V1, &mut buf, response.clone())
        .await
        .expect("Valid Vec<SealedBlockHeader> is serialized using v1");
    let deserialized_as_legacy =
        // We cannot access the codec trait from an old node here,
        // so we deserialize directly using the `LegacyResponseMessage` type.
        deserialize::<LegacyResponseMessage>(&buf).expect("Deserialization as LegacyResponseMessage should succeed");
    // then
    match deserialized_as_legacy {
        LegacyResponseMessage::SealedHeaders(None) => {}
        other => {
            panic!("Deserialized to {other:?}, expected {response:?}")
        }
    }
}

Contributor Author

I have changed the names of the tests to follow your suggestions in 6276af5

@acerone85 acerone85 requested a review from MitchTurner October 22, 2024 11:24
@acerone85 acerone85 merged commit bb6a42c into master Oct 22, 2024
39 checks passed
@acerone85 acerone85 deleted the 1311-return-better-error-messages-from-p2p-requests branch October 22, 2024 12:46
Comment on lines +191 to +195
pub enum PostcardProtocol {
    #[default]
    V1,
    V2,
}
Collaborator

Why is V1 the default if we prefer V2?

Contributor Author

Thanks, this is a leftover from when I was doing protocol selection manually. In fact, we don't even need a Default implementation.
I have removed it in the follow-up MR: #2377 (commit ea4842c)

-ResponseMessage::Transactions(v) => {
-    c.send((peer, Ok(v))).is_ok()
+V2ResponseMessage::Transactions(v) => {
+    c.send((peer, Ok(v.ok()))).is_ok()
Collaborator

It would be nice to have the same TODO here

-ResponseMessage::TxPoolAllTransactionsIds(v) => {
-    c.send((peer, Ok(v))).is_ok()
+V2ResponseMessage::TxPoolAllTransactionsIds(v) => {
+    c.send((peer, Ok(v.ok()))).is_ok()
Collaborator

It would be nice to have the same TODO here

-ResponseMessage::TxPoolFullTransactions(v) => {
-    c.send((peer, Ok(v))).is_ok()
+V2ResponseMessage::TxPoolFullTransactions(v) => {
+    c.send((peer, Ok(v.ok()))).is_ok()
Collaborator

It would be nice to have the same TODO here

Contributor Author

These are addressed in the follow-up MR (#2377), but I agree that in the future I should duplicate the TODO comment in all relevant places
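
For context, a hedged sketch of what repeating that TODO at the .ok() conversion sites could look like; the match arm is taken from the snippets above and the comment wording from the earlier diff:

    V2ResponseMessage::Transactions(v) => {
        // TODO: Do we want explicit status codes or an Error type?
        // For now the error code is dropped (`v.ok()`) so the existing
        // channel type keeps working.
        c.send((peer, Ok(v.ok()))).is_ok()
    }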

acerone85 added a commit that referenced this pull request Nov 14, 2024
## Linked Issues/PRs
<!-- List of related issues/PRs -->

Closes #1311

## Description
<!-- List of detailed changes -->

This PR follows #2362, which introduces a new version of the protocol
`/fuel/req_res/0.0.2` that allows errors to be returned in the
response. This PR makes the following changes:
- [X] the different errors that can be returned in a response are
implemented.
- [X] The type of `ResponseSender` used by a peer to propagate received
responses upstream is changed to allow the errors to be propagated.
- [X] Error responses received by a peer following a request are logged
as messages at `warning` level (see the sketch below).
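
A minimal sketch of the warning-level logging described in the last item, assuming the tracing crate is used for logging; the variant and field names are illustrative, not this PR's exact code:

    // Surface error responses from peers as warnings.
    if let V2ResponseMessage::SealedHeaders(Err(error_code)) = &response {
        tracing::warn!(?peer_id, ?error_code, "Peer responded with an error code");
    }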


## Checklist
- [ ] Breaking changes are clearly marked as such in the PR description
and changelog
- [ ] New behavior is reflected in tests
- [ ] [The specification](https://github.com/FuelLabs/fuel-specs/)
matches the implemented behavior (link update PR if changes are needed)

### Before requesting review
- [ ] I have reviewed the code myself
- [ ] I have created follow-up issues caused by this PR and linked them
here

### After merging, notify other teams

[Add or remove entries as needed]

- [ ] [Rust SDK](https://github.com/FuelLabs/fuels-rs/)
- [ ] [Sway compiler](https://github.com/FuelLabs/sway/)
- [ ] [Platform
documentation](https://github.com/FuelLabs/devrel-requests/issues/new?assignees=&labels=new+request&projects=&template=NEW-REQUEST.yml&title=%5BRequest%5D%3A+)
(for out-of-organization contributors, the person merging the PR will do
this)
- [ ] Someone else?
Successfully merging this pull request may close these issues.

Return better error messages from P2P requests