k8s nodePort Service can't be reached externally #31
Comments
I'm assuming we've not hit this before because the default FORWARD behavior is usually ACCEPT, so the rule hasn't been needed. @tomdee, should flannel be configuring this rule when the default FORWARD behavior is not ACCEPT?
FWIW, on a default Ubuntu 16.04 install with ufw enabled, FORWARD is set to DROP.
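For anyone checking their own hosts, the default policy is easy to inspect (the ufw config path below is the stock Ubuntu location; adjust for your distribution):

```
# Show the FORWARD chain's default policy (ACCEPT or DROP)
sudo iptables -S FORWARD | head -n 1

# On Ubuntu with ufw, the default is set in /etc/default/ufw
grep DEFAULT_FORWARD_POLICY /etc/default/ufw
```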
@ravishivt have you done any more testing around this issue? I'm just curious whether any updates have resolved it. Or is this really a flannel issue that should be raised upstream? I found these two issues in the flannel repo:
@ravishivt Your solution sounds like the one suggested by the flannel project, so I'm going to close this issue. Please feel free to reopen if you think this is not the correct action, or open a new issue if you hit other problems. Thanks.
If your software requires …
@upskill-mrollins This repo is for Canal, which is a specific configuration of flannel and Calico; there isn't any separate Canal software, so there isn't really a place where it modifies iptables rules. There is kubernetes/kubernetes#39823, for which I submitted a fix to kube-proxy addressing this issue (it has been merged and released). Since kube-proxy is responsible for setting up K8s Services (including NodePorts), I agreed that since it needs FORWARD ACCEPT behavior, it should set the rule. What version of kube-proxy is Rancher using? (I'm not sure which release first included my changes, but it has been there for several releases now.) Are you using Canal? If not, I'd suggest opening a Calico issue instead of commenting on this Canal one.
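To check the kube-proxy version in a cluster, something like this works (assuming kube-proxy runs as a DaemonSet in kube-system, as on kubeadm clusters; Rancher may deploy it differently):

```
# The image tag usually encodes the version
kubectl -n kube-system get daemonset kube-proxy \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Or, on a node with the binary available
kube-proxy --version
```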
Totally agree; I think this is more of a flannel issue than anything else, and their documentation does point this out. I was new to Canal and didn't understand the underlying dependencies. It also turned out that Rancher 2.0.6 shipped a default configuration that enabled the pod security features of Canal. They set this to disabled in their 2.0.7 release, so my cross-namespace communication is working now. Thank you for your response!
I just wasted over 20 hours trying to set up my first Kubernetes cluster, and I'm still not there. In my eyes we have either a documentation gap here ("the user is responsible for setting the FORWARD policy to ACCEPT") or a bug. I tested with BOTH flannel and Calico, so reading the above, there are already at least three different network controllers that can't handle this, each of which should have an issue opened (if one hasn't been already). At a minimum, a tool like kubeadm should warn about this when it detects it, the same way it warns about bad sysctl settings and missing kernel modules.
Kubernetes, specifically kube-proxy, will add ACCEPT rules to the FORWARD chain to cover this case, provided the --cluster-cidr flag is set. I've raised a new issue to track doing something about this: projectcalico/calico#2230
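A quick way to check whether those rules are in place (the KUBE-FORWARD chain name is as in recent upstream kube-proxy versions; it may not exist on older releases):

```
# Check whether kube-proxy was started with --cluster-cidr
ps aux | grep '[k]ube-proxy'

# If so, kube-proxy's forwarding rules should appear in the filter table
sudo iptables -S FORWARD | grep -i kube
sudo iptables -S KUBE-FORWARD
```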
I have a 3-node Kubernetes cluster: kube-ravi196 (10.163.148.196, master), kube-ravi197 (10.163.148.197), and kube-ravi198 (10.163.148.198). I have a pod currently scheduled on kube-ravi198 that I'd like to expose externally to the cluster, so I have a Service of type NodePort with nodePort set to 30080. Running curl localhost:30080 locally on each node succeeds, but externally curl nodeX:30080 only works against kube-ravi198; the other two time out.
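For reference, a minimal sketch of the kind of setup involved (the deployment name my-app and target port 8080 are illustrative, not from the original cluster):

```
# Expose an existing deployment named my-app (hypothetical) on NodePort 30080
kubectl create service nodeport my-app --tcp=80:8080 --node-port=30080

# Succeeds on every node:
curl localhost:30080
# From outside the cluster, only succeeds against the node hosting the pod:
curl 10.163.148.198:30080
```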
I debugged iptables and found that the external request is getting dropped in the FORWARD chain, as it's hitting the default DROP policy. From my (limited) understanding of Canal, Canal sets up a flannel.1 interface on each node and then creates one calico interface for each pod running on the node. It then sets up a felix-FORWARD iptables target in the FORWARD chain to ACCEPT any traffic entering or leaving a calico interface. The problem is that node-to-node traffic goes through the flannel.1 interface, and there is nothing to ACCEPT traffic forwarded to it. Doing curl localhost:30080 works because it bypasses the FORWARD chain even though it's getting DNATed (presumably because locally originated traffic traverses the OUTPUT and INPUT chains rather than FORWARD).
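One way to confirm that packets are dying in the FORWARD chain is to watch its counters, or temporarily log whatever falls through to the default policy (a debugging sketch; the LOG rule is noisy, so remove it when done):

```
# The chain policy counters climb as packets hit the default DROP
sudo iptables -L FORWARD -v -n

# Temporarily log packets that reach the end of the chain
sudo iptables -A FORWARD -j LOG --log-prefix 'FORWARD-DROP: '
sudo dmesg | tail

# Remove the LOG rule when done
sudo iptables -D FORWARD -j LOG --log-prefix 'FORWARD-DROP: '
```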
My fix is to add:
sudo iptables -A FORWARD -o flannel.1 -j ACCEPT
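Note that a rule added this way won't survive a reboot, and some setups also add the matching inbound rule (whether it's needed depends on your existing policy). On Debian/Ubuntu, one common way to persist the rules:

```
# Optionally also accept traffic arriving from the flannel VXLAN interface
sudo iptables -A FORWARD -i flannel.1 -j ACCEPT

# Persist the rules across reboots (Debian/Ubuntu)
sudo apt-get install iptables-persistent
sudo netfilter-persistent save
```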
Debug info below:

- iptables-save
- iptables nat table
- iptables filter table
- iptables FORWARD chain on ravi-kube196
- iptables felix-FORWARD chain on ravi-kube196
- ravi-kube196 interfaces (the node I'm testing external connectivity to the pod from)
- ravi-kube198 interfaces (the node running the target pod)
I originally raised this with Kubernetes (kubernetes/kubernetes#39658), but now think this is a Canal-specific issue.