Kind Load Balancer #702
It would be interesting to describe the use cases better, cc: @PercyLau |
/kind design |
See previous discussions including #691 (comment). On Docker for Linux you can deploy something like MetalLB and have fun today. To make something portable that we ship by default with kind, you will need to solve the networking problems on Docker for Windows, Mac, etc., and design it such that we can support e.g. ignite or kata later. This is in the backlog until someone proposes and proves a workable design. |
see also though in the meantime: https://mauilion.dev/posts/kind-metallb/ |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Another interesting project: https://github.com/alexellis/inlets-operator#video-demo. It has some examples with kind: https://github.com/alexellis/inlets-operator#run-the-go-binary-with-packetcom |
/remove-lifecycle stale |
@BenTheElder Hey Ben, do you think an ETA can be set for this feature? I wonder whether I can try to help here. |
There is no ETA because it needs a workable design to be agreed upon. So far we don't have one. |
This is another workaround: https://gist.github.com/alexellis/c29dd9f1e1326618f723970185195963 |
hehe, I think this is the simplest and most bash-scriptable one:

```bash
# expose the service
kubectl expose deployment hello-world --type=LoadBalancer

# assign an IP to the load balancer
kubectl patch service hello-world -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'

# it works now
kubectl get services
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   13m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   5m47s
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          22m
```
|
wow, even simpler. And using this script to set the ingress IP (see comment #702 (comment)): https://gist.github.com/aojea/94e20cda0f4e4de16fe8e35afc678732 |
@aojea That's not a load balancer; an external IP can be set regardless of the service type. If a load balancer controller is active, the ingress entries should appear in the service status field.
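To make the distinction concrete, here is a minimal sketch using the hello-world service from the example above:

```bash
# spec.externalIPs is a field you set manually, on any service type:
kubectl get svc hello-world -o jsonpath='{.spec.externalIPs}'

# whereas a real load balancer controller writes the assigned address
# into the service status:
kubectl get svc hello-world -o jsonpath='{.status.loadBalancer.ingress}'
```
|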
hi @adampl, thanks for clarifying; let me edit the comment |
For me I'd love a similar solution to minikube tunnel. I test multiple services exposed via an istio ingress-gateway and use DNS for resolution with fixed ports. The DNS config is automated: after running minikube tunnel my script grabs the external IP and updates the DNS records.
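The grab-the-IP step looks something like this sketch (the service name assumes istio's default install, and a hosts-file entry stands in for a real DNS update):

```bash
# istio's default ingress gateway service; adjust for your install
IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# stand-in for a real DNS record update
echo "$IP my-service.example.test" | sudo tee -a /etc/hosts
```
|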
First you have to solve the docker for Mac / Linux issue that the VM in which containers run has no IP, and containers have no IPs reachable from the host (only port forwarding). At this time I'm not aware of a clean solution.

On Linux you don't need much: the node containers are routable out of the box, and if you want you can add a route to the service CIDR via a node, as sketched below.
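A minimal sketch of that route, assuming the default kind service CIDR of 10.96.0.0/16 and the default control-plane container name (check your cluster's actual values first):

```bash
# find the IP of a node container on the kind docker network
NODE_IP=$(docker inspect -f \
  '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' \
  kind-control-plane)

# route the service CIDR via that node (Linux host only)
sudo ip route add 10.96.0.0/16 via "$NODE_IP"
```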
|
Er docker for Mac / Windows*
We don't have any control over that, and it seriously limits our options
|
@aojea and I briefly discussed some prototypes for this, but we're not ready to move on anything yet. We link to the MetalLB guide here: https://kind.sigs.k8s.io/docs/user/resources/#how-to-use-kind-with-metalllb. FWIW, MetalLB also runs some CI with kind last I checked, but Linux-only still. |
Provided info is a bit outdated now. This is how I managed to get it working on the latest version:

```bash
$ cat << EOF | kind create cluster --image kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# 1 control plane node and 3 workers
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
```

```bash
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
```

On first install only:

```bash
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
```

```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.1-172.20.255.250
EOF
```

NOTE: 172.20.x.x are IPs not used in the network range created by kind for the cluster (`docker network inspect kind`).

To check the installation and configuration:

```bash
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: inanimate/echo-server
        ports:
        - containerPort: 8080
EOF

$ kubectl expose replicaset echo --type=LoadBalancer

$ kubectl get svc echo
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
echo   LoadBalancer   10.109.194.17   172.20.255.1   8080:30256/TCP   151m

$ curl http://172.20.255.1:8080
```
|
I've been using this for 6-7 months now and it's been working pretty well for me. |
If you are trying to get this working on Docker for Windows (it will probably work for Mac too), it's very similar to @rubensa's comment #702 (comment) except for the address you need, and then you can expose the service. I may update my fork of @Xtigyro's repo with the setup once I get it working properly. Update: done; see https://github.com/williscool/deploy-kubernetes-kind |
adding to what @rubensa posted, this will auto-detect the correct address range for your kind network:

```bash
network=$(docker network inspect kind -f "{{(index .IPAM.Config 0).Subnet}}" | cut -d '.' -f1,2)

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $network.255.1-$network.255.250
EOF
```
|
For macOS at least, I've found that I can hackily get this to work by using an external docker container that runs socat. Behind the scenes the controller is really just starting/stopping/updating a docker image that looks something like this:

```bash
docker run -d --restart always \
  --name kind-kind-proxy-31936 \
  --publish 127.0.0.1:31936:31936 \
  --link kind-control-plane:target \
  --network kind \
  alpine/socat -dd \
  tcp-listen:31936,fork,reuseaddr tcp-connect:target:31936
```

You still need to look up the proper ports to route to, but it works for both NodePort and LoadBalancer services. The proof-of-concept controller I wrote will handle the normal operations like updating the status with the ingress IP. It needs the Docker socket mounted into the nodes:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
```

I'm wondering if this would be of use to anyone else?
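With the proxy container running, traffic to the published localhost port reaches the service. A usage sketch (the echo service name is hypothetical, and 31936 stands in for whatever NodePort your service actually got):

```bash
# look up the node port assigned to the service
kubectl get svc echo -o jsonpath='{.spec.ports[0].nodePort}'

# then reach it from the macOS host through the socat proxy
curl http://127.0.0.1:31936
```
|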
Forgot to mention: we have a dedicated MetalLB guide, for Linux at least: https://kind.sigs.k8s.io/docs/user/loadbalancer/. I think @aojea and @mauilion had some other ideas to discuss as well? @christopherhein that sounds pretty useful; I'm not sure it's the direction we should progress here, for the reasons you mentioned, but I'm sure some users would be interested anyhow. |
Yeah, the insecure side of mounting the docker socket is kind of annoying. It would be easy to reduce the risk by only scheduling on control-plane nodes and limiting the docker mount to that kind node, but still. The hard part for me would be figuring out where I could share this code base :) |
Does the guide still work now that docker is no longer used under the hood in the latest version of kind? I'm absolutely guessing at what's wrong with my setup, but I've used this before, following the docs around load balancers, and it just doesn't seem to work anymore. (I'm running kind in docker, but I can see that kind is running containerd underneath... if the bridge network belongs to docker on the outside, I don't see how containerd can talk on it from the inside. I'm not a networking expert; the errors I'm getting are "no route to host".)

The docker machine itself shows up in my ARP tables and responds to pings, but the load balancers I created in the same IP range in ARP mode seem to be no-shows. I'm happy to try an earlier version, although I can't tear this one down right now; I just wondered if anyone has already observed an issue with this configuration recently that maybe hasn't been logged.

FWIW, I did find an issue logged against metallb which suggested disabling IPv6 in order to reach the load balancer IPs, and I am now having success with that method as I write this... (I'm at least able to reach one of my load balancers now, from the host and from the other side of the tailnet; the other one is not being advertised, but as far as I can tell that's a problem downstream, not related to metallb...)
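For reference, disabling IPv6 on the kind bridge interface looks roughly like this sketch (the interface name is hypothetical, and whether this is the right fix depends on your setup; user-defined Docker networks get a bridge named br-<first 12 chars of the network ID>):

```bash
# find the bridge interface backing the kind network
docker network inspect kind -f '{{.Id}}' | cut -c1-12   # e.g. 0123456789ab

# disable IPv6 on it (Linux sysctl; adjust the interface name)
sudo sysctl -w net.ipv6.conf.br-0123456789ab.disable_ipv6=1
```
|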
To clarify: KIND switched to containerd inside of the node containers before this issue was even filed, back in May 2019. The guide has received many updates and should be working. The rest of your comment is somewhat ambiguous due to loose terminology around e.g. "in docker" (usually means inside a docker container, but given the context I think you just mean docker nodes, not podman nodes) or "docker machine" (I think you mean a node container, but it could be the VM or physical machine in which you're running docker, which then has the node containers). Please file a new support or bug issue with all the details of your host environment, to keep discussion organized and to provide enough details to help diagnose. |
I don't have an issue at this point, and I don't know that this issue needs to remain open either, although I'm not sure I read the full history. I came to this issue and reported here because I was having trouble and it was open, so from my perspective it was ambiguous whether the guide should be expected to work.

I'd suggest closing this issue if nobody is actively having problems that can really be attributed to kind now. Farming the feature out to metallb and covering it with docs on the KinD side seems like all that is needed. It is documented, and the documentation is super clear. No action needed from my perspective.

Otherwise, sorry for contributing to the noise floor. If your mind is made up that Kind should support load balancers in a more direct or first-class way, I think the support as it is today is just fine. |
No worries, I just can't tell enough to debug your specific issue yet, and we should have that discussion separately; if you need help with that in the future, please do file an issue and we'll try to help figure it out. As for this issue: the severe limitations on Mac and Windows are still problematic. Most cluster tools provide a working, reachable load balancer out of the box; it would be great to ship something that handles this more intelligently, we just haven't had the time yet. Theoretically, one could write a custom controller that tunnels the traffic to the host and the docker network simultaneously with some careful hackery, and consider making it a standard part of kind clusters. |
@christopherhein thanks for the hint about the |
xref #3086, which is ongoing now. See https://github.com/kubernetes-sigs/cloud-provider-kind for an early implementation.
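Roughly, trying the early implementation looks like this (a sketch; check the cloud-provider-kind README for current instructions, and the install path assumes the usual Go tooling):

```bash
# install and run the out-of-tree cloud provider alongside a kind cluster
go install sigs.k8s.io/cloud-provider-kind@latest
cloud-provider-kind

# LoadBalancer services in the cluster should then get an external IP
kubectl get svc -A -w
```
|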
/close

This is done now. |
@aojea we should really add it to the kind docs and move out the existing loadbalancer docs to another page or something. |
What would you like to be added:
A Load Balancer
Why is this needed:
#411 (comment)