docker is required for container runtime even though I am using containerd #2364
your workaround is to skip the phase for now
i just tried removing the docker binary and this command fails for me too.
the one solution here is to fetch what CRI socket is on this Node object, but this means we need to know the node name.
the alternative is to require the user to pass
we did remove the so it seems appropriate to fetch it from the Node object. cc @fabriziopandini @SataQiu WDYT?
BTW @SataQiu looks like this wasn't a sufficient fix: ...
or instead of fetching the Node cri-socket, we may have to apply CRI socket detection here:
/kind feature
@brianmay i'd assume this is a problem for
Yes, this is on upgrades. As above, it looks like there might be a workaround via the --config parameter. Will try ASAP.
Sorry, ignore my previous response. I was getting confused. What is the difference between "upgrade apply" and "upgrade node"? Can I pass a config file that only sets
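As a rough illustration of the --config idea, a minimal sketch, assuming the field in question is nodeRegistration.criSocket in the kubeadm InitConfiguration (the file name and socket path are placeholders, and whether the upgrade commands honour a socket passed via --config depends on the kubeadm version):

```bash
# Hypothetical kubeadm config that pins the CRI socket to containerd.
cat > kubeadm-cri.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
EOF
```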
I have the same issue when calling the print-join-command.
I got:
I did not really understand all the discussion about this warning. Should we ignore this? Joining a worker node to the master is working fine - even without the docker daemon installed. And the cluster seems to work.
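For context, the command being referred to here is presumably the token-create form that prints the join command (assumed invocation):

```bash
# Prints the full `kubeadm join ...` command for worker nodes;
# this is where the runtime warning shows up.
kubeadm token create --print-join-command
```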
@rsoika I believe that warning can be ignored. It is the hard error I was getting that cannot be ignored. I am hoping I might be able to resolve this without converting my cluster back to docker... But so far everyone seems to be rather quiet on the subject of a solution or even a workaround. Unless of course kubeadm 1.20.1 has made any changes to fix this?
we should fix this after the holidays.
@neolit123 Great news, thanks.
after I remove /var/run/dockershim.sock and /var/run/docker.sock, the command works.
removing /var/run/docker*.sock is actually a good solution. when no config file (with an explicit socket) is passed to a kubeadm command and the docker socket is present on the host, it will take priority.
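A minimal sketch of that workaround (assuming the sockets live under /var/run; moving them aside rather than deleting makes it easy to revert):

```bash
# Move the docker/dockershim sockets out of the way so kubeadm's
# runtime auto-detection falls through to the containerd socket.
for s in /var/run/docker.sock /var/run/dockershim.sock; do
  [ -S "$s" ] && sudo mv "$s" "$s.bak"
done
```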
In my case I don't have anything that matches I do have a
Here is a KISS workaround:
@AleksandrNull So I guess this means that the docker calls aren't actually required for the upgrade to work? If so, good to know.
I was hitting this exact error when trying to use kubeadm upgrade apply. @AleksandrNull's solution worked perfectly and I was able to upgrade my dev cluster to 1.19.5 this morning.
@brianmay That's correct. It basically checks for the docker binary and tries to pre-pull images. Pulling images (using docker) is absolutely useless with containerd as the default runtime, so this "mock" does no harm.
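The original snippet is not preserved in this thread, but based on the description above, a sketch of that kind of mock could be a stub docker binary that simply exits successfully (hypothetical, not the poster's exact script):

```bash
# Hypothetical stub: a fake `docker` that accepts any arguments and
# succeeds, so kubeadm's docker checks and pre-pull pass as no-ops.
# Remove it again once the CRI socket/annotation is fixed.
sudo tee /usr/local/bin/docker > /dev/null <<'EOF'
#!/bin/sh
exit 0
EOF
sudo chmod +x /usr/local/bin/docker
```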
@brianmay i tested and looked at the code today, it looks fine. my guess is that you switched to containerd but the CRI socket on that Node object still points to docker.
if you patch/edit the kubeadm.alpha.kubernetes.io/cri-socket value the kubeadm command should work. kubeadm does not really support switching container runtimes on the fly or similar reconfiguration during upgrade... please check this discussion: and watch this ticket:
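A sketch of that edit (the node name and socket path are placeholders; older kubeadm releases may record the socket without the unix:// prefix):

```bash
# Point the kubeadm CRI-socket annotation on the Node object at containerd.
kubectl annotate node <node-name> --overwrite \
  kubeadm.alpha.kubernetes.io/cri-socket=unix:///run/containerd/containerd.sock
```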
this PR should make all commands that don't need the container runtime stop checking for running docker or crictl:
@pacoxu would you have time to backport your PR to 1.18, 1.19, 1.20?
@neolit123 ok let me do it
So for every control plane node I get:
Can I confirm that this - or similar - is the correct command to fix it (for every control plane node):
If I had known that you were still writing migration documentation, I might have waited. It is perhaps unfortunate that docker-shim was announced as deprecated, with advice to migrate over, before the documentation was complete. And often projects don't bother with upgrade instructions :-(. But regardless, thanks for the references supplied above to the PR and issue.

For the record, the migration was relatively straightforward. Nothing on my system depends on Docker, except the CNI file, which was somewhat painful to work out, particularly as I am using dual IPv4 and IPv6 and need to supply multiple subnets. Supposedly the auto-generated file was supposed to appear in my logs from before the migration, but I looked and looked and couldn't find it. I think I worked it out, but my solution does involve hard-coding the nodes' subnet ranges. IIRC I tried "usePodCidr" for the IPv4 subnet and got loud objections. It would be nice if I didn't have to do this, but it is acceptable for this cluster.
i warned about this on the deprecation PR: |
we should include this in the migration guide (TBD).
i didn't have to do this when i tried migrating docker -> containerd, but that was single-stack IPv4. closing, as the main issue is explained and the side issues were addressed by the PR.
@neolit123: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Revised the above command:
It looks good now. |
I think this command needs to communicate with the apiserver; if the apiserver is stopped, how do you update the value of the cri-socket annotation? I am following this guide: https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/change-runtime-containerd/
The problem is that the kubelet is already stopped and the apiserver pod is also stopped, so I can't run
for single control plane clusters this can be a problem, yes. you can log an issue in kubernetes/website about it. the annotation can be safely edited before the kubelet is stopped.
Thanks a loooooot. Confused for a few days.
I created this script to migrate from docker to containerd. This works on Oracle Linux and Rocky Linux.
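The script itself is not reproduced here; a rough outline of such a migration, following the dockershim-to-containerd guide linked earlier (package names, paths, and flags are assumptions and differ per distribution), might look like:

```bash
NODE=$(hostname)
kubectl drain "$NODE" --ignore-daemonsets
# Update the kubeadm.alpha.kubernetes.io/cri-socket annotation as shown earlier.
systemctl stop kubelet
systemctl disable --now docker
# Install and configure containerd (distribution-specific package name assumed).
dnf install -y containerd.io
containerd config default > /etc/containerd/config.toml
systemctl enable --now containerd
# Edit /var/lib/kubelet/kubeadm-flags.env and add:
#   --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
systemctl start kubelet
kubectl uncordon "$NODE"
```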
This has worked for me these past few days as I am working on automating this task, as well as upgrading the entire system to Rocky Linux 9, along with Kubernetes, to the latest version. I think that, thanks to this post, I finally know why my Kubernetes upgrades were failing.
Is this a BUG REPORT or FEATURE REQUEST?
Choose one: BUG REPORT
Versions

kubeadm version (use kubeadm version):
root@kube-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:57:36Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:
- Kubernetes version (use kubectl version): 0.20.0
- Kernel (use uname -a): 4.19.0-13-amd64

What happened?
kubeadm upgrade node tries to run docker, but I have switched to containerd:
What you expected to happen?
"kube upgrade node" like "kubeadm config images pull" should run cri commands, not docker commands.
I think the "docker info" part is related to #2270 - but in that case it is a warning only.
But it looks like the last message is a hard error.
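For what it's worth, the image pre-pull path can already be pointed at a CRI socket explicitly, for example (socket path assumed):

```bash
kubeadm config images pull --cri-socket unix:///run/containerd/containerd.sock
```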