sealos 4.0: kubernetes:v1.24.2 fails to start after uninstalling an old version #1251
Comments
What version is the old one?
7月 05 14:34:30 k8s.10.0.0.101 containerd[14575]: time="2022-07-05T14:34:30.370124062+08:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
The old version is sealos:3.0.
Please post the full kubelet log; these lines are not the key information.
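A minimal way to capture the full kubelet log on the failing master, assuming kubelet runs as a systemd unit (as it does under a kubeadm-based install):

systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager -o short-precise > kubelet.log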
Installer log:
2022-07-05 11:49:34 [EROR] Applied to cluster error: failed to init init master0 failed, error: failed to execute command(kubeadm init --config=/var/lib/sealos/data/default/etc/kubeadm-init.yaml --skip-certificate-key-print --skip-token-print -v 0 --ignore-preflight-errors=SystemVerification) on host(10.0.10.101:22): output(W0705 11:45:15.060301 33945 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0705 11:45:34.491781 33945 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://10.0.10.101:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0705 11:45:34.669181 33945 kubeconfig.go:249] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://10.0.10.101:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher), error(Process exited with status 1). Please clean and reinstall
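The repeated "Using existing ..." certificate lines and the two kubeconfig warnings about an unexpected API server URL (https://apiserver.cluster.local:6443) suggest /etc/kubernetes still holds files from the sealos 3.0 install. A minimal cleanup sketch before retrying, assuming it is acceptable to wipe this node's Kubernetes state (paths are the kubeadm defaults; verify before deleting):

kubeadm reset -f                      # tear down kubelet state and static pod manifests
rm -rf /etc/kubernetes /var/lib/etcd  # drop stale certs, kubeconfigs, and etcd data
systemctl restart containerd          # restart the runtime with a clean state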
containerd log:
7月 05 11:49:36 k8s.10.0.0.101 containerd[32343]: time="2022-07-05T11:49:36.280114877+08:00" level=error msg="PullImage \"k8s.gcr.io/kube-scheduler:v1.24.2\" failed" error="failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.24.2\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.24.2\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.24.2\": dial tcp 142.250.157.82:443: i/o timeout"
7月 05 11:50:06 k8s.10.0.0.101 containerd[32343]: time="2022-07-05T11:50:06.282947711+08:00" level=error msg="PullImage \"k8s.gcr.io/etcd:3.5.3-0\" failed" error="failed to pull and unpack image \"k8s.gcr.io/etcd:3.5.3-0\": failed to resolve reference \"k8s.gcr.io/etcd:3.5.3-0\": failed to do request: Head \"https://k8s.gcr.io/v2/etcd/manifests/3.5.3-0\": dial tcp 142.250.157.82:443: i/o timeout"
7月 05 11:50:36 k8s.10.0.0.101 containerd[32343]: time="2022-07-05T11:50:36.285500460+08:00" level=error msg="PullImage \"k8s.gcr.io/kube-controller-manager:v1.24.2\" failed" error="failed to pull and unpack image \"k8s.gcr.io/kube-controller-manager:v1.24.2\": failed to resolve reference \"k8s.gcr.io/kube-controller-manager:v1.24.2\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-controller-manager/manifests/v1.24.2\": dial tcp 142.250.157.82:443: i/o timeout"
7月 05 11:51:06 k8s.10.0.0.101 containerd[32343]: time="2022-07-05T11:51:06.288345405+08:00" level=error msg="PullImage \"k8s.gcr.io/kube-apiserver:v1.24.2\" failed" error="failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.24.2\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.24.2\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.24.2\": dial tcp 142.250.157.82:443: i/o timeout"
7月 05 11:51:36 k8s.10.0.0.101 containerd[32343]: time="2022-07-05T11:51:36.291146322+08:00" level=error msg="PullImage \"k8s.gcr.io/kube-scheduler:v1.24.2\" failed" error="failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.24.2\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.24.2\": failed to do request: Head \"https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.24.2\": dial tcp 142.250.157.82:443: i/o timeout"
No containers have been started under containerd:
crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
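Every pull above times out against k8s.gcr.io (dial tcp 142.250.157.82:443: i/o timeout), so the control-plane images never arrive and the kubelet has nothing to start. One hedged workaround sketch, assuming containerd with the CRI plugin; the Aliyun endpoint below is only an example mirror, substitute one you trust:

# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
  endpoint = ["https://registry.aliyuncs.com/google_containers"]

Then restart the runtime (systemctl restart containerd) and retry the install. Alternatively, pull the images on a machine with registry access, export them as a tar, and load them on the node with ctr -n k8s.io images import.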