You seem to have troubles using Kubernetes and kubeadm.
Note that our issue trackers should not be used for providing support to users.
There are special channels for that purpose.
What keywords did you search in kubeadm issues before filing this one?
If you have found any duplicates, you should instead reply there and close this page.
If you have not found any duplicates, delete this section and continue on.
Is this a BUG REPORT or FEATURE REQUEST?
Choose one: BUG REPORT or FEATURE REQUEST
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.2", GitCommit:"89a4ea3e1e4ddd7f7572286090359983e0387b2f", GitTreeState:"clean", BuildDate:"2023-09-13T09:34:32Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kubernetes version (use kubectl version):
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2
OS: CentOS Linux release 7.9.2009 (Core)
Kernel (uname -a): Linux riscmaster0 3.10.0-1160.49.1.el7.x86_64 #1 SMP Tue Nov 30 15:51:32 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Container networking plugin (CNI) (e.g. Calico, Cilium): Calico or Weave
Others:
What happened?
While deploying high availability for a k8s cluster using keepalived/haproxy, following the guide "Creating Highly Available Clusters with kubeadm" (stacked control plane and etcd nodes):
if the haproxy frontend bind address:port is kept consistent with the control-plane endpoint, as shown in the guide (sudo kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs),
kubeadm init fails with the following error:
$ sudo kubeadm init --control-plane-endpoint "192.168.222.11:6443" --upload-certs
…
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
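For context, a quick check (not part of the original report) of what already holds the API server port when the preflight check fails; with the haproxy configuration below bound to *:6443 and running on the same node, this would typically show haproxy holding the port:

$ sudo ss -tlnp | grep ':6443'   # lists the process listening on 6443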
keepalived.conf
...
vrrp_instance RGW {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    authentication {
        auth_type PASS
        auth_pass 9999
    }
    virtual_ipaddress {
        192.168.222.11/24
    }
    track_script {
        check_apiserver
        check_haproxy
    }
}
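The track_script entries above refer to vrrp_script definitions that are elided in the snippet. For illustration only (the script path and thresholds here are assumptions, following the pattern used in the kubeadm HA guide), such a definition looks roughly like:

# Illustrative sketch only; the real vrrp_script blocks are not shown in the report
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"   # assumed health-check script path
    interval 3
    weight -2
    fall 10
    rise 2
}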
haproxy.cfg
…
frontend frontserver
    bind *:6443
    mode tcp
    stats uri /haproxy?stats
    option tcplog
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js
    default_backend controllers

backend controllers
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server riscmaster1 192.168.222.201:6443 check
    server riscmaster2 192.168.222.202:6443 check
    server riscmaster3 192.168.222.203:6443 check
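Since haproxy here runs on the control-plane nodes themselves, it cannot bind the same port the local kube-apiserver needs (the apiserver listens on 0.0.0.0:6443 by default), which is exactly what the preflight error above reports. A minimal sketch of the co-located alternative, assuming the load balancer is moved to port 8443 as tried below:

# Sketch only, not the reporter's final configuration
frontend frontserver
    bind *:8443          # frees 6443 for the local kube-apiserver
    mode tcp
    option tcplog
    default_backend controllers

kubeadm init then has to point at the same port, e.g. sudo kubeadm init --control-plane-endpoint "192.168.222.11:8443" --upload-certs.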
Changed the frontend to bind *:8443, but then the control-plane endpoint cannot be reached from the other control-plane nodes, as it is a VIP.
Also tried running keepalived and haproxy as static pods; the issue is the same.
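Whether the VIP and the load-balancer port are reachable can be checked independently of kubeadm; a minimal sequence, assuming eth0 and the addresses/ports from the configs above:

# On the keepalived MASTER node: confirm the VIP is actually assigned
ip addr show eth0 | grep 192.168.222.11

# From another control-plane node: confirm something answers on the VIP and port
# (any HTTP response, even 401/403, proves TCP connectivity through haproxy)
curl -k https://192.168.222.11:8443/healthz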
What you expected to happen?
High availability for a Kubernetes cluster with multiple control-plane nodes, using keepalived and haproxy.
How to reproduce it (as minimally and precisely as possible)?
See the "What happened?" section above.
Anything else we need to know?