
Yamux resets stream when both peers are in Client mode #89

Closed
BlackYoup opened this issue Aug 4, 2020 · 3 comments
Comments

@BlackYoup

Hello everyone,

I'm opening this issue because I'm running into a problem with yamux, which I'm using as part of a wider libp2p-rust project. I don't have this issue with mplex in the same project and, as a side note, I don't have it with raw TCP communication either in a small PoC.

I have a scenario with three nodes. One node (A) is publicly reachable; the two other nodes (B and C) are behind a NAT. To make B and C communicate, I use a TCP hole punching NAT traversal scheme I implemented myself.

B and C are connected to A and exchange information. At some point, C needs to connect to B. To do that, C sends a message to A to initiate the TCP hole punching. A tells B to start connecting to the public address of C, and at the same time answers C with the publicly visible IP of B.

Both nodes create a socket and bind it to the same local address they used to contact node A, then use that socket to connect. B and C start trying to connect to each other, punching a hole in the NAT.

Obviously, B and C have to try multiple times before they successfully connect to each other. When an attempt fails, they receive a simple Connection Refused error and keep trying.
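That retry loop can be sketched roughly as follows (a minimal, hedged sketch; `connect_with_retries` and its `connect` parameter are illustrative names, and `connect` stands in for the actual bind-then-connect call in the PoC, which is not shown here):

```rust
use std::{thread, time::Duration};

// Minimal sketch of the retry loop: call `connect` until it succeeds or we
// give up. In the real PoC, `connect` would bind to the same local address
// used to reach node A and then dial the peer's public address.
fn connect_with_retries<T, E>(
    mut connect: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    delay: Duration,
) -> Option<T> {
    for _ in 0..max_attempts {
        if let Ok(conn) = connect() {
            return Some(conn);
        }
        // Typically Connection Refused until the peer's NAT mapping is open.
        thread::sleep(delay);
    }
    None
}
```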

At some point, B and C are connected at the TCP level, yet neither B nor C ever accept()ed a connection. I don't know if that's specific to Rust or just how the kernel handles things, but since both ends try to reach the other, no socket is accepted; the system merges the two attempts into a single TCP connection (a TCP simultaneous open).

Then yamux enters the game. I guess that because neither socket is "seen" as the listener, both ends are put in Mode::Client. At some point, yamux on one of the nodes throws the error invalid stream id X and ends the substream, making it impossible to exchange data. The other node sees the connection end.

I have a working PoC of this issue here: https://github.com/BlackYoup/poc-tcp-hole-punching/blob/master/src/raw_yamux.rs . Pardon the code, which may be a bit "raw"; it took a few hours to set this up and I went the very quick way.

It might not reproduce locally (I couldn't reproduce it that way), but between NATed nodes the issue shows up as described. If you don't have the possibility to test and want to try, I think I can manage to give you access to such a setup. To run:

  • In src/raw_yamux.rs in the raw_yamux() function, change the SocketAddr with the IP / port of the publicly reachable node A.
  • To start node A: RUST_LOG=trace cargo run --release -- yamux SERVER
  • To start node B / C: RUST_LOG=trace cargo run --release -- yamux CLIENT

As for the logs, I'll only include them from the point where both nodes start connecting to each other, which is when the line Init tcp hole punch is printed (don't mind the panic; I used unwrap() everywhere):

Node B:
Init tcp hole punch
 2020-08-04T19:14:10.211Z TRACE mio::poll                 > deregistering handle with poller
We are connected! Try to init yamux :) // We are connected at the TCP level
 2020-08-04T19:14:10.214Z TRACE mio::poll                 > registering with poller
 2020-08-04T19:14:10.214Z DEBUG yamux::connection         > new connection: 4e4b3a1c (Client)
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Init)
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 0)
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 0)
 2020-08-04T19:14:10.214Z TRACE yamux::connection         > 4e4b3a1c: creating new outbound stream
 2020-08-04T19:14:10.214Z TRACE yamux::connection         > 4e4b3a1c: sending initial (Header WindowUpdate 1 (len 262144) (flags 1))
 2020-08-04T19:14:10.214Z DEBUG yamux::connection         > 4e4b3a1c: new outbound (Stream 4e4b3a1c/1) of (Connection 4e4b3a1c Client (streams 0))
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 12)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (Header WindowUpdate 1 (len 262144) (flags 1))
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > 4e4b3a1c: received: (Header WindowUpdate 1 (len 262144) (flags 1))
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > is_valid_remote_id: id=StreamId(1), tag=WindowUpdate, mode=Client
 2020-08-04T19:14:10.215Z ERROR yamux::connection         > 4e4b3a1c: invalid stream id 1
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > 4e4b3a1c: sending term
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Init)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (ReadState::Header 12)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 4e4b3a1c: read: (Header GoAway 0 (len 1) (flags 0))
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > 4e4b3a1c: received: (Header GoAway 0 (len 1) (flags 0))
 2020-08-04T19:14:10.215Z TRACE yamux::connection::stream > 4e4b3a1c/1: update state: (Open Closed Closed)
 2020-08-04T19:14:10.215Z TRACE mio::poll                 > deregistering handle with poller
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Io(Custom { kind: WriteZero, error: "4e4b3a1c/1: connection is closed" })', src/raw_yamux.rs:156:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Node C:
Init tcp hole punch
 2020-08-04T19:14:10.213Z TRACE mio::poll                 > deregistering handle with poller
We are connected! Try to init yamux :)  // We are connected at the TCP level
 2020-08-04T19:14:10.214Z TRACE mio::poll                 > registering with poller
 2020-08-04T19:14:10.214Z DEBUG yamux::connection         > new connection: 7ffc697e (Client)
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Init)
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 0)
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 0)
 2020-08-04T19:14:10.214Z TRACE yamux::connection         > 7ffc697e: creating new outbound stream
 2020-08-04T19:14:10.214Z TRACE yamux::connection         > 7ffc697e: sending initial (Header WindowUpdate 1 (len 262144) (flags 1))
 2020-08-04T19:14:10.214Z DEBUG yamux::connection         > 7ffc697e: new outbound (Stream 7ffc697e/1) of (Connection 7ffc697e Client (streams 0))
 2020-08-04T19:14:10.214Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 12)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (Header WindowUpdate 1 (len 262144) (flags 1))
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > 7ffc697e: received: (Header WindowUpdate 1 (len 262144) (flags 1))
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > is_valid_remote_id: id=StreamId(1), tag=WindowUpdate, mode=Client
 2020-08-04T19:14:10.215Z ERROR yamux::connection         > 7ffc697e: invalid stream id 1
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > 7ffc697e: sending term
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Init)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 0)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (ReadState::Header 12)
 2020-08-04T19:14:10.215Z TRACE yamux::frame::io          > 7ffc697e: read: (Header GoAway 0 (len 1) (flags 0))
 2020-08-04T19:14:10.215Z TRACE yamux::connection         > 7ffc697e: received: (Header GoAway 0 (len 1) (flags 0))
 2020-08-04T19:14:10.215Z TRACE yamux::connection::stream > 7ffc697e/1: update state: (Open Closed Closed)
 2020-08-04T19:14:10.215Z TRACE mio::poll                 > deregistering handle with poller
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Io(Custom { kind: WriteZero, error: "7ffc697e/1: connection is closed" })', src/raw_yamux.rs:156:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

You might also note that before writing / reading, I make a tokio::time::delay_for call to show that the stream gets stopped. Without it, some data goes through before the stream is ended, which might mask the issue.

Is all this expected behaviour?

Let me know if you need any other details or if I can be of any help here.

Thank you!

@twittner
Contributor

twittner commented Aug 5, 2020

Yes, this is expected behaviour. Yamux splits the stream ID space into two non-overlapping sets. The client set consists of odd IDs and the server set of even IDs. See https://github.com/hashicorp/yamux/blob/master/spec.md#streamid-field for details.
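For illustration, the parity rule behind the invalid stream id error can be sketched like this (a simplified sketch, not yamux's actual code; `Mode` and `is_valid_remote_id` mirror names seen in the logs above, and the sketch ignores the reserved session ID 0):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Mode {
    Client,
    Server,
}

// Clients allocate odd stream IDs and servers allocate even ones, so a
// remote-initiated stream must carry the parity of the *other* side's role.
// With both peers in Client mode, each sends odd IDs and each rejects them.
fn is_valid_remote_id(id: u32, mode: Mode) -> bool {
    match mode {
        Mode::Client => id % 2 == 0, // remote should be the server: even IDs
        Mode::Server => id % 2 == 1, // remote should be the client: odd IDs
    }
}
```

This matches the logs above: a node in Client mode receives stream ID 1 (odd, i.e. client-allocated) and rejects it.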

A tells B to start connecting to the public address of C. At the same time, A answers to C with the publicly visible IP of B. Both nodes create a socket and then bind it to the same address used to contact node A. It then use this socket to connect. B and C start trying to connect to each other, creating a hole in the NAT.

Even when a connection is established by a simultaneous connect, one still needs to make one endpoint the server and the other the client. In general this could be based on an ordering of IP addresses, e.g. the endpoint with the smaller address assumes the client role and the other one the server. This is purely needed for the ID space split mentioned above and has no other implications.
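One hedged way to implement that tie-break (assuming both sides can observe the same pair of addresses; `pick_mode` is an illustrative name, not a yamux API):

```rust
use std::net::SocketAddr;

#[derive(Debug, PartialEq)]
enum Mode {
    Client,
    Server,
}

// Deterministic tie-break after a simultaneous connect: both peers compare
// the same two addresses, so they always end up picking opposite roles.
// The ordering itself is arbitrary; it only has to agree on both ends.
fn pick_mode(local: SocketAddr, remote: SocketAddr) -> Mode {
    if (local.ip(), local.port()) < (remote.ip(), remote.port()) {
        Mode::Client
    } else {
        Mode::Server
    }
}
```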

@BlackYoup
Author

Thanks for your response.

The client set consists of odd IDs and the server set of even IDs

Yes, that's what I understood of the current code.

Even when connections are established by a simultaneous connect, one still needs to make one endpoint the server and the other the client.

I implemented that logic in my PoC and indeed it works fine now. But in a libp2p project, I don't think I can do that. Maybe I should open an issue there with my use case?

@twittner
Contributor

twittner commented Aug 5, 2020

Yes, please file an issue in rust-libp2p. The connection upgrade there always uses Mode::Server for inbound and Mode::Client for outbound upgrades. libp2p-yamux needs to provide a way to override this. Ideally the transport builder would also allow access to the connection info.
