Broadcast external IP from NAT in enode record #10274
Conversation
crates/net/network/src/network.rs
Outdated
@@ -213,7 +215,17 @@ impl PeersInfo for NetworkHandle {
        let id = *self.peer_id();
        let mut socket_addr = *self.inner.listener_address.lock();

        if socket_addr.ip().is_unspecified() {
            let external_ip: Option<IpAddr> = self.inner.nat.and_then(|nat| {
This can just use self.inner.nat.external_addr() instead of only supporting the ExternalIp variant.
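The reviewer's suggestion amounts to resolving whatever NAT variant is configured instead of special-casing one variant at the call site. A minimal self-contained sketch of that pattern, using stand-in types (the real NatResolver lives in reth's net-nat crate and its API may differ; the synchronous external_addr here is a hypothetical stand-in for illustration):

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Stand-in for reth's NatResolver; the real type has more variants
// (UPnP, public-IP lookup, ...) and a different resolution API.
#[derive(Clone, Copy)]
enum NatResolver {
    ExternalIp(IpAddr),
    None,
}

impl NatResolver {
    // Hypothetical accessor mirroring the reviewer's point: resolve
    // whichever variant is configured, not just ExternalIp.
    fn external_addr(&self) -> Option<IpAddr> {
        match self {
            NatResolver::ExternalIp(ip) => Some(*ip),
            NatResolver::None => None,
        }
    }
}

// Mirrors the logic in the diff above: only override the listener
// address when it was bound to an unspecified IP (0.0.0.0 / [::]).
fn resolved_socket_addr(listener: SocketAddr, nat: Option<NatResolver>) -> SocketAddr {
    let mut socket_addr = listener;
    if socket_addr.ip().is_unspecified() {
        if let Some(ip) = nat.and_then(|nat| nat.external_addr()) {
            socket_addr.set_ip(ip);
        }
    }
    socket_addr
}

fn main() {
    let listener: SocketAddr = "0.0.0.0:30303".parse().unwrap();
    let nat = Some(NatResolver::ExternalIp(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 7))));
    println!("{}", resolved_socket_addr(listener, nat)); // 203.0.113.7:30303
}
```

Note that an already-specified bind address (e.g. 192.168.1.5) is left untouched; the NAT-resolved IP is only a fallback for the unspecified case.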
crates/net/network/src/config.rs
Outdated
@@ -81,6 +81,8 @@ pub struct NetworkConfig<C> {
    pub tx_gossip_disabled: bool,
    /// How to instantiate transactions manager.
    pub transactions_manager_config: TransactionsManagerConfig,
    /// The NAT resolver for external IP
    pub nat: Option<NatResolver>,
It may be worth calling this external_ip_resolver instead of nat, for clarity.
I think this is a valid issue; adding NatResolver as a separate discovery service makes sense. Probably the Discovery type is a more appropriate place for it than NetworkInner:
reth/crates/net/network/src/discovery.rs
Lines 37 to 69 in b2a035a
/// An abstraction over the configured discovery protocol.
///
/// Listens for new discovered nodes and emits events for discovered nodes and their
/// address.
#[derive(Debug)]
pub struct Discovery {
    /// All nodes discovered via discovery protocol.
    ///
    /// These nodes can be ephemeral and are updated via the discovery protocol.
    discovered_nodes: LruMap<PeerId, PeerAddr>,
    /// Local ENR of the discovery v4 service (discv5 ENR has same [`PeerId`]).
    local_enr: NodeRecord,
    /// Handler to interact with the Discovery v4 service
    discv4: Option<Discv4>,
    /// All KAD table updates from the discv4 service.
    discv4_updates: Option<ReceiverStream<DiscoveryUpdate>>,
    /// The handle to the spawned discv4 service
    _discv4_service: Option<JoinHandle<()>>,
    /// Handler to interact with the Discovery v5 service
    discv5: Option<Discv5>,
    /// All KAD table updates from the discv5 service.
    discv5_updates: Option<ReceiverStream<discv5::Event>>,
    /// Handler to interact with the DNS discovery service
    _dns_discovery: Option<DnsDiscoveryHandle>,
    /// Updates from the DNS discovery service.
    dns_discovery_updates: Option<ReceiverStream<DnsNodeRecordUpdate>>,
    /// The handle to the spawned DNS discovery service
    _dns_disc_service: Option<JoinHandle<()>>,
    /// Events buffered until polled.
    queued_events: VecDeque<DiscoveryEvent>,
    /// List of listeners subscribed to discovery events.
    discovery_listeners: Vec<mpsc::UnboundedSender<DiscoveryEvent>>,
}
nit: would have been beneficial to open this as an issue first, to get feedback on the solution, before implementing
@emhane Yeah sorry, I felt the change was pretty small here so thought opening a PR would be ok too. I can still go open an issue if that's preferred. Let me make a change to add
@emhane I'm actually a bit confused about what the ideal implementation should be, so wanted to get more clarification.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
that's a valid point, initially --nat:extip was only intended for discovery, but I agree there should be an option to set this even if discovery is turned off.
need to check how this flag usually behaves, but I assume this means integrating it on the NetworkManager. need to think about this a bit, because I think we can then also get rid of the optional discv4 handle that we use to access the external or provided ip
moving
I see, so if I understand correctly, the ideal functionalities would be
Does my understanding make sense? @emhane @mattsse One thing I'm still a bit confused about is: what exactly is meant by
yes to all above
open an issue for this, but skip it in this PR
this is now very different from what was originally proposed in 27da136 and I don't see how this supports setting an external IP now via --nat:extip
can we please roll this back to the approach in 27da136 and use the NatResolver.
we could maybe add this docker resolver as an additional variant separately.
13ce5d0 to 7cf37dd
Hey @emhane @mattsse, I was working on merging the commit with
@cody-wang-cb can you try out the solution I made and see if it works for your setup? @mattsse there is a test for the docker IP resolution, which passes in CI, so that means it works in docker
imo the earlier approach makes the most sense here: set a provided --nat:extip as the local node record. and this was already very close.
but this isn't quite the same feature as setting a given extip?
@@ -467,6 +479,12 @@ impl NetworkConfigBuilder {
        self.build(Default::default())
    }

    /// Sets the NAT resolver for external IP.
    pub const fn add_nat(mut self, nat: Option<NatResolver>) -> Self {
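A minimal sketch of this builder-style setter pattern, with stand-in types (the real NetworkConfigBuilder carries many more fields; the NatResolver here is a simplified placeholder):

```rust
use std::net::IpAddr;

// Stand-in for reth's NatResolver; simplified to a single variant.
#[derive(Clone, Copy, Debug, PartialEq)]
enum NatResolver {
    ExternalIp(IpAddr),
}

#[derive(Default)]
struct NetworkConfigBuilder {
    nat: Option<NatResolver>,
}

impl NetworkConfigBuilder {
    /// Sets the NAT resolver for the external IP.
    // `const fn` works here because we only move a Copy value into a field;
    // this matches the const builder-setter style used elsewhere in reth.
    const fn add_nat(mut self, nat: Option<NatResolver>) -> Self {
        self.nat = nat;
        self
    }
}

fn main() {
    let ip: IpAddr = "203.0.113.7".parse().unwrap();
    let builder = NetworkConfigBuilder::default().add_nat(Some(NatResolver::ExternalIp(ip)));
    // The setter consumes and returns the builder, so it chains naturally
    // in tests, which is the convenience mentioned below.
    assert_eq!(builder.nat, Some(NatResolver::ExternalIp(ip)));
    println!("nat = {:?}", builder.nat);
}
```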
Added this mainly so that the unit test can be written more conveniently; would love to know if there's a better way.
Can you pls explain more about the scenario in which you will use this? If the docker container can't connect to the Internet anyway, why would you want its ENR to have the host's IP?
@emhane the reason is that we don't want these nodes connected to the Internet directly, but still able to be manually peered with other internal nodes, thus exposing their host ips (which are private ips
thanks for the explanation @cody-wang-cb. how about using the provided solution with
@emhane I don't know networking very well but I don't think this really works cross hosts? I think the
cc: @danyalprout Also, I think it makes more sense for us to focus on whether exposing an external ip via
I think this is because you didn't have one container with a static ip as the boot node. but usually docker assigns IPs sequentially, so 172.17.0.2 for the first one you start, 172.17.0.3 for the next one. so if you start 5 containers, you know their ip addresses will be 172.17.0.2, 172.17.0.3, 172.17.0.4, 172.17.0.5 and 172.17.0.6 in default settings. you can see it by running some simple linux containers in interactive mode and querying for their IP address. run these commands in a few shells at the same time and read the IP address:

$ docker run -it --rm alpine /bin/sh
/ # ifconfig
But this is for a single host? I don't think we want to run multiple reth containers on one single host.
why do you specifically want to not use the host network driver? it would be best suited for your setup
@emhane In theory we can run it in host mode, but that'll require a bunch of other changes like fixing port collisions. We've been running our chain infra with geth in non-host mode since the beginning; I don't think it makes sense for us to change our infra setup when the change in this PR is very straightforward to make it work.
sure, we are open to making changes to cater to your specific setup :) regarding the implementation: you manually want to pass the host's LAN ip to the node running in docker, if I understood it right. pending requested changes from @mattsse
Yes that's correct. Would be great if y'all could help review this, hoping to iterate on this and have it merged EoW if possible :), so that we don't have to continue using my forked repo
I'd like @mattsse to check it out too, think he has a better idea of what/why we are including
@mattsse mind taking a look at this PR when you have a chance?
Wanted to bump this PR again; would love to have this included in the next release so we don't need to use a forked commit.
lgtm, sorry I overlooked this
Co-authored-by: Matthias Seitz <[email protected]>
In our setup, we are running some op-reth nodes in non-discovery mode, in non-host docker containers, and peering the nodes manually. By default, in non-discovery mode reth uses 127.0.0.1 as the enode IP, but we want to set the IP in the enode URL to the host IP so that other internal nodes can still connect to it.

We tried --addr, but it didn't work. It seems that was because the server would then listen on that IP directly, which we cannot do since we are running in a non-host docker container, so we prefer to just broadcast the host IP rather than binding to it directly.

In geth, --nat:extip has worked for us. However, we tried using --nat:extip in reth and it didn't work. Looking at the code, it is only applied in discv4, but we have discovery turned off.

Thus, proposing a fix to support this scenario by passing the --nat info into the NetworkHandle, so that when local_node_record() is called, it still tries to fetch the external IP (if there is one) in non-discovery mode, rather than just using the local IP for the enode.

Also, based on suggestions, added functionality such that if disable-discovery is on, nat is also disabled, and added disable-nat as well.
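To make the motivation concrete, here is an illustrative sketch (not reth's actual NodeRecord/enode types; the peer id is a placeholder) of why the advertised IP matters: with discovery off, an enode built from the bind address advertises 127.0.0.1, which other hosts cannot dial, while the NAT-provided host IP yields a dialable record.

```rust
use std::net::{IpAddr, Ipv4Addr};

// Simplified enode URL construction for illustration only; the real
// format embeds a 64-byte hex public key as the peer id.
fn enode_url(peer_id: &str, ip: IpAddr, port: u16) -> String {
    format!("enode://{peer_id}@{ip}:{port}")
}

fn main() {
    let id = "abcd"; // hypothetical placeholder peer id
    // Without the fix: loopback, unreachable from other hosts.
    let local = enode_url(id, IpAddr::V4(Ipv4Addr::LOCALHOST), 30303);
    // With the fix: the NAT-provided host IP, dialable by internal peers.
    let external = enode_url(id, IpAddr::V4(Ipv4Addr::new(10, 0, 0, 5)), 30303);
    println!("{local}");    // enode://abcd@127.0.0.1:30303
    println!("{external}"); // enode://abcd@10.0.0.5:30303
}
```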