question: outbound ip of pods #67

Closed
marc0olo opened this issue Feb 20, 2022 · 12 comments

@marc0olo

Hi,

I am really thankful for your contribution, helped me a lot to set up my cluster :-)

I am currently struggling to figure out which outbound IP my pods are using. If possible, I'd like to have a static IP. I thought it would be the IP of the load balancer (ingress), but it seems it isn't.

Would appreciate your help!

Best regards
Marco

@vitobotta
Owner

If you need a static IP outside the cluster then yes, the Load Balancer is what you need. How have you set up the deployment and the ingress?

@marc0olo
Author

Interesting. So it should generally work already? What's the easiest way to check this if I don't have access to the firewall blocking the requests? I would really like to see which outbound IP is being used.

For the ingress controller I used the following config:

controller:
  kind: DaemonSet
  service:
    # LIST of all ANNOTATIONS: https://github.com/hetznercloud/hcloud-cloud-controller-manager/blob/master/internal/annotation/load_balancer.go
    annotations:
      # Germany:
      # - nbg1 (Nuremberg)
      # - fsn1 (Falkenstein)
      # Finland:
      # - hel1 (Helsinki)
      # USA:
      # - ash (Ashburn, Virginia)
      load-balancer.hetzner.cloud/location: nbg1
      load-balancer.hetzner.cloud/name: ingress-nginx
      load-balancer.hetzner.cloud/use-private-ip: 'true'
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
      load-balancer.hetzner.cloud/hostname: my-hostname
      load-balancer.hetzner.cloud/http-redirect-https: 'false'

For the ingress of the service where I need a static IP I used the following config:

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my-hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
  tls:
  - hosts:
    - my-hostname
    secretName: prod-certificate

For the deployment there is nothing special either.

I have also set up hairpin-proxy due to problems getting certificates via Let's Encrypt.

@marc0olo
Author

It may be that I just forgot to expose the service of the deployment with type LoadBalancer. I will check and close the issue if that's the case 🤦‍♂️

@vitobotta
Copy link
Owner

You can find the IP either with kubectl -n ingress-nginx get svc | grep LoadBalancer (for example with the Nginx ingress), or from the Hetzner console. That's the IP you need to configure in DNS for your workloads. Also, you are confusing the hostname you set with the annotation load-balancer.hetzner.cloud/hostname with the hostname of your application. These mean different things, even though they can have the same value. The hostname you set in the annotation is any hostname that points to the load balancer, and it is required when you also enable proxy protocol for the load balancer and Nginx. Without it, issuing certificates with Let's Encrypt, for example, won't work because of the proxy protocol.

If the ingress is not working as expected and you cannot access the application from its hostname, it's probably because you forgot to enable proxy protocol in the Nginx configmap too. If you use proxy protocol you need to enable it both in the load balancer annotations as you have already done, AND in the Nginx configmap, otherwise you won't be able to access the application correctly.

So you have two options:

  1. Leave the proxy protocol enabled for the load balancer but also enable it in Nginx's configmap
  2. Set the proxy protocol to false in the annotation and reinstall Nginx.

Proxy protocol is only needed if you care to know the actual IP address of your users. If you don't care just set it to false to disable it so you can simplify things.
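For option 1, a minimal sketch of the relevant Helm values, assuming the official ingress-nginx chart (the controller.config section is rendered into the Nginx ConfigMap; only the proxy-protocol-related keys are shown):

```yaml
controller:
  kind: DaemonSet
  config:
    # Tell Nginx to expect the proxy protocol header sent by the load balancer
    use-proxy-protocol: 'true'
  service:
    annotations:
      # Must match the Nginx setting above: both on, or both off
      load-balancer.hetzner.cloud/uses-proxyprotocol: 'true'
```

If the two settings disagree, requests either fail to parse (LB sends the header, Nginx doesn't expect it) or hang (Nginx waits for a header that never arrives).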

@marc0olo
Author

marc0olo commented Feb 21, 2022

I am not sure we are talking about the same thing right now. Accessing my services from the public internet using the hostnames I defined in the ingress works without any problem.

My problem is that I need a deployment in the cluster to connect to a server on the internet with a static IP. This deployment should access a dedicated server on a specific port which is secured by a firewall that only accepts traffic on this port from my Kubernetes deployment (or, more precisely, from the static IP it should have).

This is what I am struggling with right now 😞

@marc0olo
Author

I am wondering if this is just a config issue on my end. The service I am talking about is not supposed to expose anything; it just consumes a remote API. I "just" need to make sure it connects to the remote server with a static IP, if possible.

If anybody has (or had) a similar problem and knows how to deal with it, I would appreciate any help 🙌

@vitobotta
Owner

Ah, I see. I had indeed misunderstood your issue. In this case you need to whitelist the IPs of the nodes in the remote firewall so they can access the server. Of course, you will need to update the firewall whenever you change the nodes.

Another option is to use a proxy. You could create a small cloud instance outside the cluster, install something like TinyProxy and use that as proxy from your pods, so you only need to whitelist the IP of the proxy instance and nothing else. So the IP will be static for as long as you keep the proxy instance.
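Wiring the pods to such a proxy can be sketched like this; the image, namespace, and proxy address (10.0.0.5) are placeholders, and TinyProxy listens on port 8888 by default. Most HTTP client libraries honour the conventional proxy environment variables:

```yaml
# Hypothetical Deployment snippet: route the app's outbound HTTP(S)
# traffic through the proxy instance so it egresses with the proxy's IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api-consumer
  namespace: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api-consumer
  template:
    metadata:
      labels:
        app: my-api-consumer
    spec:
      containers:
      - name: app
        image: my-registry/my-api-consumer:latest
        env:
        - name: HTTP_PROXY
          value: http://10.0.0.5:8888   # TinyProxy instance, default port
        - name: HTTPS_PROXY
          value: http://10.0.0.5:8888
        - name: NO_PROXY
          value: .svc,.cluster.local,10.0.0.0/8   # keep in-cluster traffic direct
```

Note that this only covers clients that respect these variables; traffic from tools that ignore them would still leave through the node's own IP.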

@marc0olo
Author

> Ah, I see. I had indeed misunderstood your issue. In this case you need to whitelist the IPs of the nodes in the firewall, to be able to access the service. Of course you will need to update the firewall if you change the nodes etc.

Yeah, I think this is the quickest solution.

> Another option is to use a proxy. You could create a small cloud instance outside the cluster, install something like TinyProxy and use that as proxy from your pods, so you only need to whitelist the IP of the proxy instance and nothing else. So the IP will be static for as long as you keep the proxy instance.

That sounds like a cool idea. I will check it out if I find the time. Thank you very much for your time and effort!

@vitobotta
Owner

No problem. Closing for now, since it's not an issue with the tool anyway :)

@jampy

jampy commented Sep 21, 2023

Sorry to step in here and continue the discussion, but I'm planning my own cluster based on hetzner-k3s and will need a solution for the very same problem.

I'm new to Kubernetes. I've already done some tests with hetzner-k3s and I'm really impressed by how easily I managed to set up a working Kubernetes cluster with it. Kudos!

Anyway, whitelisting the node IPs in the foreign firewall is not a viable option for me, because I want to be flexible with the nodes without having to request changes to external firewalls. I also dislike the proxy suggestion (outside the cluster) because I'm aiming at a fully IaC setup, and managing a separate VM outside Kubernetes is something I would like to avoid.

I'm wondering, would it be possible to create a Kubernetes service with a TinyProxy container and add an init container that attaches a Floating IP to the node? I don't know how to make outgoing TinyProxy traffic on that node use the Floating IP, but perhaps you could give me a hint?

@vitobotta
Owner

Hi @jampy, what you suggest could work, I think. To dynamically attach a floating IP to cluster nodes I've seen https://github.com/costela/hcloud-ip-floater and I wonder if it could help. An alternative is to do something like what's described in https://metawave.ch/posts/kubernetes-hetzner-ingress/ using MetalLB and a controller to manage floating IPs.
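For the outbound case specifically, attaching the floating IP to the node is only half the job: the node must also be told to use it as the source address for outgoing packets. A rough sketch of the node-side steps, as a privileged init container (the floating IP 203.0.113.10 and the interface name eth0 are placeholders; the pod would need hostNetwork: true for these commands to affect the node itself, and something like hcloud-ip-floater or a manual hcloud floating-ip assign would handle the Hetzner API side):

```yaml
# Hypothetical init container illustrating the node-side setup only.
hostNetwork: true
initContainers:
- name: setup-floating-ip
  image: alpine:3.19
  securityContext:
    privileged: true
  command:
  - sh
  - -c
  - |
    apk add --no-cache iptables iproute2
    # 1. Add the floating IP to the node's public interface
    ip addr add 203.0.113.10/32 dev eth0 || true
    # 2. Source-NAT outbound traffic so it leaves with the floating IP
    iptables -t nat -C POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10 \
      || iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source 203.0.113.10
```

This is only a sketch of the idea, not a production setup: the SNAT rule affects all outbound traffic from that node, so in practice you would likely want to restrict it to the destination host and port of the remote API.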

@jampy

jampy commented Nov 29, 2024

Discussion continued in #494
