CRI-O: Default podman CNI config prevents use of custom network plugins #8480
Comments
Hey @joshmue, thanks for opening this issue. It seems like there's a workaround, but the behavior isn't intuitive. Maybe adding to the documentation is the solution here. I'm not super familiar with CNI; maybe @medyagh or @tstromberg could weigh in?
Sounds like something for upstream, i.e. podman vs. CNI? They have been fighting over it before, so that's CNI for you... I think the previous default was to run the configurations in alphanumerical order, thus the weird "87" prefix? For later CRI-O versions, there should be a configuration entry:

# The default CNI network name to be selected. If not set or "", then
# CRI-O will pick-up the first one found in network_dir.
# cni_default_network = ""

So that can be used to avoid the podman CNI network? I don't think that Docker uses CNI, so that's probably why it doesn't have this problem.
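A minimal sketch of how that entry could be used inside the minikube VM, assuming a CRI-O version that supports cni_default_network; the network name "my-custom-net" is only a placeholder, not something defined in this issue:

```shell
# Inside the minikube VM (e.g. after `minikube ssh`):
# point CRI-O at a specific CNI network instead of the first file in net.d.
# Assumes CRI-O reads /etc/crio/crio.conf; "my-custom-net" is a placeholder.
sudo sed -i 's/^# cni_default_network = ""/cni_default_network = "my-custom-net"/' /etc/crio/crio.conf
sudo systemctl restart crio
```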
@priyawadhwa @afbjorklund Thank you for your responses!
That would explain a lot, indeed. I already suspected something like this, as the cilium/minikube walkthrough explicitly sets

I was not able to test this yet, as this apparently has not landed in minikube yet. Tweaking the configuration would only be possible after the cluster has started up, right? I think this may lead to pods bound to different networks, just like with the workaround I described at the top. E.g. the control plane's CoreDNS pods are provisioned using the default bridge, while workload pods created later get IPs from the custom network plugin.

Perhaps it would be an option to handle

IIRC, as a result, this would match the behavior of "traditional" kubeadm installations. With

Of course, most people most likely just want to get started with minikube as quickly as possible, so defaulting

Please correct me if I missed anything.
We added the podman CNI so that power users can use podman. It is not needed for Kubernetes, but we are providing both services: Docker/Podman and Kubernetes.

It is supposed to be possible to have multiple CNIs installed (and configured) without it exploding?

Anyway, the /etc/cni/net.d config file is not created by
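To see which CNI configurations are actually present inside the minikube VM, a quick sketch using the directory discussed in this thread:

```shell
# List the CNI configurations that CRI-O can pick up
minikube ssh -- ls -l /etc/cni/net.d/
```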
@joshmue - we recently refactored our CNI configurations, but tried to stick as closely as possible to preserving existing behavior. In this case, I think it's possible that we do better now. Please try:

Please note that I did not change
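The refactor mentioned above introduced a --cni flag for minikube start; a hedged sketch of the kind of invocation that could be tried, where the --cni=false value is an assumption rather than a confirmed recommendation:

```shell
# Assumption: disable minikube's own CNI configuration so a custom plugin
# can be installed afterwards, while keeping CRI-O as the runtime.
minikube start --container-runtime=cri-o --network-plugin=cni --cni=false
```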
@tstromberg Thank you for the information! I did a first round of quick tests with this flag, without success so far. I will report back with more qualified feedback ASAP.
I was able to do a little more testing now. When just using

Using
@joshmue thank you for updating the issue. Did that answer solve the issue, or do you think we could still do anything on the minikube side to make the experience better?
Hi @medyagh! While the steps written down in my initial comment are now kind of outdated, it would be beneficial to have the

If you want me to update this issue's title or test something, please say so.
Retested with minikube 1.14.2: works now without any problems or workarounds. 🎊 Thank you all!
Steps to reproduce the issue:
minikube start --network-plugin=cni --container-runtime=cri-o
(without --enable-default-cni, as opposed to a plain minikube start).

Apparent cause of the issue:
It turns out that there is a default podman bridge network CNI configuration at /etc/cni/net.d/87-podman-bridge.conflist in the minikube VM which prevents the custom network plugin from being used.

If using Docker as the container runtime, this default configuration does not seem to cause any problems.
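One way to confirm this, assuming the layout described above (the contents of the file are not reproduced in this issue):

```shell
# Show the default podman bridge config that CRI-O picks up
minikube ssh -- cat /etc/cni/net.d/87-podman-bridge.conflist
```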
Workaround:
Deleting this CNI configuration, restarting CRI-O, and recreating the affected pods causes the NetworkPolicies to be enforced as expected.
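A minimal sketch of that workaround, assuming the affected pods are managed by controllers that will recreate them; the CoreDNS selector below is only an illustrative example, not taken from this issue:

```shell
# Remove the default podman bridge config and restart CRI-O
minikube ssh -- sudo rm /etc/cni/net.d/87-podman-bridge.conflist
minikube ssh -- sudo systemctl restart crio
# Recreate affected pods so they are re-attached via the custom plugin
# (illustrative selector for the CoreDNS pods in kube-system)
kubectl delete pod -n kube-system -l k8s-app=kube-dns
```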
The next time the minikube VM is started, /etc/cni/net.d/87-podman-bridge.conflist will be present again.

Thoughts:
Actually, I am not sure whether this is an issue with minikube or whether CRI-O just handles prioritization differently than Docker. Either way, when using minikube, this behavior is unexpected, IMHO.