static pods not visible in kubectl get pods #9937
https://gist.github.com/zetaab/24f8d1402a11123a09fc35fe4168ec55 — the static pods are there on the nodes (apiserver, scheduler, controller-manager, etcds, ...), but they are not visible in the API.

We also see this happening in all new clusters that we create automatically in our e2e tests: if a cluster is created with 1.17 / 1.18 and updated to 1.19, the static pods will be missing from the API. The same happens in AWS (we execute e2e tests there as well).

I0915 06:53:16.351163 70 instancegroups.go:442] Cluster did not pass validation, will retry in "30s": master "ip-172-20-35-23.eu-north-1.compute.internal" is missing kube-apiserver pod, master "ip-172-20-35-23.eu-north-1.compute.internal" is missing kube-controller-manager pod, master "ip-172-20-35-23.eu-north-1.compute.internal" is missing kube-scheduler pod.
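(Not part of the original report: a quick way to confirm the static pods themselves are still running on an affected master even though the API does not list them is to ask the container runtime directly. The runtime CLI and component names below are assumptions, a sketch rather than a step from the report.)

```bash
# Hedged sketch: run on an affected master node.
# The control-plane containers should still be up even though the API
# shows no corresponding pods, pointing at a mirror-pod registration problem.
crictl ps --name 'kube-apiserver|kube-controller-manager|kube-scheduler'   # or: docker ps | grep kube-
kubectl get pods -n kube-system -o wide                                    # the same components are absent here
```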
I can see this in the logs:

Sep 15 08:31:35 master-zone-1-1-1-rofa1-k8s-local kubelet[3315]: E0915 08:31:35.760761 3315 kubelet.go:1576] Failed creating a mirror pod for "etcd-manager-main-master-zone-1-1-1-rofa1-k8s-local_kube-system(7974b24d667835b08eceeb7e48c06d7c)": pods "etcd-manager-main-master-zone-1-1-1-rofa1-k8s-local" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

So it looks like if we have PSP turned on, static pods will not be visible in the API.
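(A minimal sketch, assuming the root cause is that the kubelet creates mirror pods with its own node credentials (group system:nodes) rather than a kube-system service account, so PSP admission needs a privileged PSP usable by nodes. The PSP name kube-system.privileged and the role names below are placeholders, not necessarily what the kops addon uses.)

```yaml
# Hedged sketch, not the actual kops addon: grant nodes "use" on a privileged PSP
# so the kubelet can register mirror pods for its static pods.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:privileged-nodes                   # placeholder name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["kube-system.privileged"]    # placeholder PSP name
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp:privileged-nodes                   # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:privileged-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
```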
This does not affect all kops users, only those who are using PSPs. Something is wrong with our default rules in https://github.com/kubernetes/kops/blob/master/upup/models/cloudup/resources/addons/podsecuritypolicy.addons.k8s.io/k8s-1.12.yaml.template — going through that template, all kube-system service accounts should already have a PSP:
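(Again as an assumption rather than a step from the report: even if the kube-system service accounts are covered by the template, the subject that matters for mirror pods may be the node user itself, which can be checked with kubectl's impersonation flags. PSP and node names are placeholders.)

```bash
# Hedged sketch: check which subjects are actually allowed to "use" the PSP.
kubectl auth can-i use podsecuritypolicy/kube-system.privileged \
  --as=system:serviceaccount:kube-system:default
kubectl auth can-i use podsecuritypolicy/kube-system.privileged \
  --as=system:node:ip-172-20-35-23.eu-north-1.compute.internal --as-group=system:nodes
```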
Reproduce instructions (the cluster can be created on OpenStack as well):
add the following (see the sketch below):
then wait a few minutes.
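(The exact snippet from the original reproduce steps is not reproduced above; as an assumption, this is a minimal sketch of the kind of cluster-spec change that enables PSP admission in kops. The field is spec.kubeAPIServer.enableAdmissionPlugins; the plugin list shown is illustrative only.)

```yaml
# Hedged sketch of a Cluster spec change (kops edit cluster) that turns on PSP admission,
# followed by a rolling update of the masters.
spec:
  kubeAPIServer:
    enableAdmissionPlugins:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - DefaultStorageClass
    - ResourceQuota
    - PodSecurityPolicy
```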
1. What kops version are you running? The command kops version will display this information.
master
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
1.18.3 (tried also with 1.19.1, but those pods are still not visible)
3. What cloud provider are you using?
openstack / aws
4. What commands did you run? What is the simplest way to reproduce this issue?
I am trying to update clusters from 1.18.8 -> 1.19.1 using the latest kops master. After I terminated each master one by one, I can no longer see the critical Kubernetes components in kubectl get pods -n kube-system.
5. What happened after the commands executed?
If I try to execute a rolling update, the kops cluster validation will fail.
To me it looks like all the pods whose manifests are located in /etc/kubernetes/manifests on each master are missing.
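(A hedged way to show the mismatch described above: manifest files present on the master but no corresponding mirror pods in the API. The path is the standard static-pod manifest directory, not taken from the report, and the node selector is illustrative.)

```bash
# Hedged sketch: on a master, static-pod manifests exist on disk...
ls /etc/kubernetes/manifests/
# ...but the API has no mirror pods for them on this node (placeholder node name):
kubectl get pods -n kube-system --field-selector spec.nodeName=$(hostname) \
  | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler'
```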
6. What did you expect to happen?
I expect that all pods are visible under kube-system
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or into a gist and provide the gist link here.
9. Anything else we need to know?