Kind Load Balancer #702

Closed
aojea opened this issue Jul 11, 2019 · 34 comments
Labels
kind/design Categorizes issue or PR as related to design. kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@aojea
Contributor

aojea commented Jul 11, 2019

What would you like to be added:

A Load Balancer

Why is this needed:

#411 (comment)

@aojea aojea added the kind/feature Categorizes issue or PR as related to a new feature. label Jul 11, 2019
@aojea
Contributor Author

aojea commented Jul 11, 2019

It would be worth describing the use cases in more detail, cc: @PercyLau

@BenTheElder
Member

/kind design

@k8s-ci-robot k8s-ci-robot added the kind/design Categorizes issue or PR as related to design. label Jul 11, 2019
@BenTheElder
Member

See previous discussions including #691 (comment)

On Docker for Linux you can deploy something like MetalLB and have fun today. To make something portable that we ship by default with kind, you would need to solve the networking problems on Docker for Windows, Docker for Mac, etc., and design it such that we can support e.g. Ignite or Kata later.

This is in the backlog until someone proposes and proves a workable design.
/priority backlog

@BenTheElder
Member

In the meantime, see also: https://mauilion.dev/posts/kind-metallb/

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 10, 2019
@tao12345666333
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 11, 2019
@BenTheElder BenTheElder added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Oct 11, 2019
@Xtigyro

Xtigyro commented Oct 23, 2019

@BenTheElder Hey Ben - do you think an ETA can be set for this feature? I'm wondering whether I can help out here.

@BenTheElder
Member

There is no ETA because it needs a workable design to be agreed upon. So far we don't have one.

@BenTheElder
Member

This is another workaround: https://gist.github.com/alexellis/c29dd9f1e1326618f723970185195963

@aojea
Contributor Author

aojea commented Jan 12, 2020

hehe, I think this is the simplest and most easily bash-scriptable one:

# expose the service
kubectl expose deployment hello-world --type=LoadBalancer
# assign an IP to the load balancer
kubectl patch service hello-world -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
# it works now
kubectl get services
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   13m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   5m47s
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          22m

Ref: https://stackoverflow.com/a/54168660/7794348

@aojea
Contributor Author

aojea commented Jan 12, 2020

wow, even simpler:

kubectl expose deployment hello-world --name=testipv4 --type=LoadBalancer --external-ip=6.6.6.6

$ kubectl get service
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   27m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   20m
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          37m
testipv4          LoadBalancer   fd00:10:96::4236   6.6.6.6         8080:30164/TCP   6s

and using this script to set the ingress IP (see comment #702 (comment))

https://gist.github.com/aojea/94e20cda0f4e4de16fe8e35afc678732

@adampl

adampl commented Jan 13, 2020

@aojea That's not a load balancer: an external IP can be set regardless of the service type. If a load balancer controller were active, ingress entries would appear in the service's status field.
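
(For illustration, a minimal sketch of the difference, reusing the hello-world service from above: with a real load balancer controller active, you would expect the assigned address to show up in the status field, e.g.

$ kubectl get svc hello-world -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
172.31.71.218

whereas the externalIPs patch leaves that field empty.)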

@aojea
Contributor Author

aojea commented Jan 13, 2020

hi @adampl, thanks for the clarification; let me edit the comment

@tshak

tshak commented Feb 10, 2020

I'd love a solution similar to minikube tunnel. I test multiple services exposed via Istio's ingress-gateway, using DNS for resolution with fixed ports. The DNS config is automated: after running minikube tunnel, my script grabs the external IP and updates the DNS records.
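
(A rough sketch of the kind of automation described above; the service and host names here are hypothetical.)

# grab the external IP once the tunnel/load balancer has assigned one
IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# point a local hosts entry at it, as a stand-in for the real DNS update
echo "$IP gateway.example.test" | sudo tee -a /etc/hosts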

@BenTheElder
Member

@aojea and I briefly discussed some prototypes for this, but not ready to move on anything yet.

we link to the metallb guide here https://kind.sigs.k8s.io/docs/user/resources/#how-to-use-kind-with-metalllb

FWIW, MetalLB also runs some CI with kind last I checked, but it's still Linux-only.

@rubensa

rubensa commented May 6, 2020

In the meantime, see also: https://mauilion.dev/posts/kind-metallb/

The info provided there is a bit outdated now.

This is how I managed to get it working on the latest version:

$ cat << EOF | kind create cluster --image kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
 
# 1 control plane node and 3 workers
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

On first install only:

$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.1-172.20.255.250
EOF

NOTE: the 172.20.x.x addresses are unused IPs in the network range that kind created for the cluster (docker network inspect kind).
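
(To see which subnet the kind network actually uses, so you can pick a free range:)

$ docker network inspect kind -f '{{(index .IPAM.Config 0).Subnet}}'
172.20.0.0/16

With a /16 like this, the 172.20.255.x addresses are normally outside what docker allocates to containers.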

To check the installation and configuration:

$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: inanimate/echo-server
        ports:
        - containerPort: 8080
EOF
$ kubectl expose replicaset echo --type=LoadBalancer
$ kubectl get svc echo
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
echo   LoadBalancer   10.109.194.17   172.20.255.1   8080:30256/TCP   151m
$ curl http://172.20.255.1:8080

@Xtigyro

Xtigyro commented May 6, 2020

@BenTheElder @rubensa

I've been using this for 6-7 months now and it's been working pretty well for me.
-- https://github.com/Xtigyro/kindadm

@williscool

williscool commented Jul 27, 2020

If you are trying to get this working on Docker for Windows (it will probably work for Mac too):

very similar to @rubensa 's comment #702 (comment)

except for the address range, where you need:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 127.0.0.240/28
EOF

source: https://medium.com/@JockDaRock/kubernetes-metal-lb-for-docker-for-mac-windows-in-10-minutes-23e22f54d1c8

and then you can expose the service via

kubectl port-forward --address localhost,0.0.0.0 service/echo 8888:8080

I may update my fork of @Xtigyro's repo with the setup once I get it working properly.

update: did it: https://github.com/williscool/deploy-kubernetes-kind

@benmoss
Contributor

benmoss commented Nov 5, 2020

adding to what @rubensa posted, this will auto-detect the correct address range for your Kind network:

network=$(docker network inspect kind -f "{{(index .IPAM.Config 0).Subnet}}" | cut -d '.' -f1,2)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $network.255.1-$network.255.250
EOF
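
(Note that the cut -d '.' -f1,2 assumes the kind network is an IPv4 /16, docker's usual default for this network; for other subnet sizes the pool range would need adjusting.)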

@christopherhein
Member

For macOS at least, I've found that I can hackily get this to work by using an external docker container that runs socat on the kind network. This is relatively easy to automate with an in-cluster controller, as long as all the kind nodes that could run the controller have the docker sock mounted (which is, unfortunately, fairly insecure). The controller can then deploy a new docker container outside of the kind cluster that binds to 127.0.0.1 on the macOS host and replicates the NodePort through to the host OS. While not a "real" load balancer, it suffices for this case, so you don't have to run port-forwarding to get access to normally exposed services.

Behind the scenes, the controller is really just starting/stopping/updating a docker container that looks something like this:

docker run -d --restart always \
--name kind-kind-proxy-31936 \
--publish 127.0.0.1:31936:31936 \
--link kind-control-plane:target \
--network kind \
alpine/socat -dd \
tcp-listen:31936,fork,reuseaddr tcp-connect:target:31936

You still need to look up the proper ports to route to, but it works for both NodePorts and LoadBalancers. The proof-of-concept controller I wrote handles the normal operations, like updating the status with Status.Ingresses[].IP & Status.Ingresses[].Hostname. It's also nice because it doesn't require anything special out of the kind setup except for the extra volume mounts, e.g. something like this:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock

I'm wondering if this would be of use to anyone else?

@BenTheElder
Member

Forgot to mention: we have a dedicated MetalLB guide, for Linux at least: https://kind.sigs.k8s.io/docs/user/loadbalancer/

I think @aojea and @mauilion had some other ideas to discuss as well?

@christopherhein that sounds pretty useful. I'm not sure it's the direction we should take here, for the reasons you mentioned, but I'm sure some users would be interested anyhow.

@christopherhein
Member

Yeah, the insecurity of mounting the docker socket is kind of annoying. It would be easy to reduce the risk by only scheduling on control-plane nodes and limiting the docker mount to that kind node, but still. The hard part for me would be figuring out where I could share this code base :)

@kingdonb

kingdonb commented Jun 2, 2022

Does the guide still work now that docker is no longer used under the hood in the latest version of kind? I'm absolutely guessing at what's wrong with my setup, but I've used this before, following the docs around load balancers, and it just doesn't seem to work anymore.

(I'm running kind in docker, but I can see that kind is running containerd underneath... if the bridge network belongs to docker on the outside, I don't see how containerd can talk on it from the inside. I'm not a networking expert; the errors I'm getting are "no route to host".)

The docker machine itself shows up in my arp tables and responds to pings:

? (172.18.0.2) at 02:42:ac:12:00:02 [ether] on br-d9ef30b68bc8

but the load balancers I created in the same IP range in ARP mode seem to be no-shows:

? (172.18.255.201) at <incomplete> on br-d9ef30b68bc8
$ curl https://172.18.255.201
curl: (7) Failed to connect to 172.18.255.201 port 443: No route to host

I'm happy to try an earlier version, although I can't tear this one down right now. I just wondered if anyone had already observed an issue with this configuration recently that maybe hasn't been logged yet.

FWIW, I did find an issue logged against MetalLB:

which suggested disabling IPv6 in order to reach the load balancer IPs, and I am having success with that method as I write this... (I'm at least able to reach one of my load balancers now, from the host and from the other side of the tailnet; the other one is not being advertised, but as far as I can tell that's a downstream problem not related to MetalLB...)

@BenTheElder
Member

Does the guide still work now that docker is no longer used under the hood in the latest version of kind?

To clarify:
KIND nodes are docker or podman containers which run containerd inside.

KIND switched to containerd inside the node containers before this issue was even filed, back in May 2019.

The guide has received many updates and should be working.

The rest of your comment is somewhat ambiguous due to loose terminology, e.g. "in docker" (which usually means inside a docker container, but given the context I think you just mean docker nodes rather than podman nodes) or "docker machine" (I think you mean a node container, but it could be the VM or physical machine in which you're running docker, which then has the node containers).

Please file this as a new support or bug issue with all the details of your host environment, to keep the discussion organized and provide enough detail to help diagnose it.

@kingdonb

kingdonb commented Jun 2, 2022

I don't have an issue at this point, and I don't know that this issue needs to remain open either, though I'm not sure I've read the full history. I came here and reported because I was having trouble and the issue was open, so from my perspective it was ambiguous whether the guide should be expected to work.

I'd suggest closing this issue if nobody is actively having problems that can really be attributed to kind now. Farming the feature out to MetalLB and covering it with docs on the kind side seems like all that is needed.

It is documented, and the documentation is super clear. No action needed from my perspective. Otherwise, sorry for contributing to the noise floor; unless your mind is made up that kind should support load balancers in a more direct or first-class way, I think the support as it is today is just fine.

@BenTheElder
Member

No worries; I just can't tell enough to debug your specific issue yet, and we should have that discussion separately. If you need help with it in the future, please do file an issue and we'll try to help figure it out.

As far as this issue goes, the severe limitations on Mac and Windows are still problematic; most cluster tools provide a working, reachable load balancer out of the box. It would be great to ship something that handles this more intelligently; we just haven't had the time yet.

Theoretically, one could write a custom controller and tunnel the traffic to the host + the docker network simultaneously with some careful hackery, and consider making it a standard part of kind clusters.

@ReToCode

@christopherhein thanks for the hint about the socat container. I was able to run my local setup by combining your proposal with MetalLB on kind on macOS: https://github.com/ReToCode/local-kind-setup.

@BenTheElder
Member

xref #3086, which is ongoing now. See https://github.com/kubernetes-sigs/cloud-provider-kind for an early implementation.
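
(A minimal sketch of trying it out, based on the cloud-provider-kind README at the time; details may have changed:)

# install and run the external cloud provider alongside an existing kind cluster
go install sigs.k8s.io/cloud-provider-kind@latest
cloud-provider-kind
# LoadBalancer services in the kind cluster should then be assigned a reachable external IP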

@aojea
Contributor Author

aojea commented May 2, 2024

/close

This is done now

@aojea aojea closed this as completed May 2, 2024
@BenTheElder
Member

@aojea we should really add it to the kind docs and move the existing loadbalancer docs out to another page or something.

@aojea
Contributor Author

aojea commented May 2, 2024

@aojea we should really add it to the kind docs and move the existing loadbalancer docs out to another page or something.

#3584
