Kubelet label migration for NodeRestriction Admission Controller #84912
/sig node
I really don't understand this... in 1.15 it was just a warning, but in 1.16 I can't even start the kubelet with the `node-role.kubernetes.io` label set.
Nodes are not permitted to assert their own role labels. Node roles are typically used to identify privileged or control-plane types of nodes, and allowing nodes to label themselves into that pool allows a compromised node to trivially attract workloads (like control plane daemonsets) that confer access to higher-privilege credentials. See the design discussion in https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md for the rationale. You can choose a different label to use for selectors (under the node.kubernetes.io/… namespace) if you want to keep using selectors that are vulnerable to node self-labeling. You can apply the existing node role labels to node objects using kubectl or a controller (I think this is what kubeadm does).
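A minimal sketch of both suggestions, assuming a hypothetical node named `worker-1` and a hypothetical pool label:

```sh
# Option 1: apply the role label from outside the node (as an admin or
# a controller); NodeRestriction only blocks kubelet self-labeling:
kubectl label node worker-1 node-role.kubernetes.io/worker=""

# Option 2: let the kubelet self-label under the permitted
# node.kubernetes.io/ prefix and point selectors at that label
# instead (other kubelet flags omitted):
kubelet --node-labels=node.kubernetes.io/pool=workers
```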
/close
@liggitt: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From 'node-role.kubernetes.io' to 'node.openshift.io/os_id=Windows'. The main problem is kubernetes/kubernetes#84912: there we're restricting the labels that can be added in the kubernetes.io namespace. Now I am changing it to a label that other entities, like future controllers, could watch for or query from, like WSU. I am also removing the unit test because we're no longer getting the node labels after parsing the ignition file.
Please add support for showing a custom role in `kubectl get nodes`.
This reverts commit 2c4058a. Reason for revert: kubernetes/kubernetes#84912 (comment)
Change-Id: I4495afd56dfa0e264fe06550693b33fc41b4d49f
Good day! I'd like to open this topic again. What I found from my Kubernetes experience is that the most convenient label and/or taint is `node-role.kubernetes.io/<role>`. It is particularly nice that these roles are shown in the `kubectl get nodes` output. So it would be very nice if the node could set this label/taint on itself, and if somebody worries about security, we can just limit that.
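For context, the ROLES column in `kubectl get nodes` is derived from `node-role.kubernetes.io/<role>` labels, so applying one from outside the node already surfaces a custom role (illustrative node name and output):

```sh
# Label the node with a custom role from outside the node:
kubectl label node worker-1 node-role.kubernetes.io/gpu=""

kubectl get nodes
# NAME       STATUS   ROLES   AGE   VERSION
# worker-1   Ready    gpu     12d   v1.16.2
```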
Hello everyone,
I'm currently updating my Kubernetes clusters to version 1.15.x and I saw that Kubelet is reporting an error message I can no longer ignore.
What happened:
Kubelet is showing the following error message about the usage of restricted labels:
I'm setting the labels with:
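For illustration, a typical way to set such labels is the kubelet's `--node-labels` flag (hypothetical role name, other flags omitted):

```sh
# Hypothetical role name; the kubelet applies --node-labels at
# registration, which NodeRestriction rejects for node-role.kubernetes.io:
kubelet --node-labels=node-role.kubernetes.io/worker=""
```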
I'm using these labels to show the node roles in the output of the `kubectl get node` command. There is an announcement of that change in the changelog for 1.13:
I also found these entries in the documentation.
The next step for me was to find a way to migrate the labels to achieve the same behaviour.
Looking into the kubelet source code, I found that there is actually no way to do that.
The labels seem to be hardcoded: source
What's the plan here?
There are some references:
How to reproduce it (as minimally and precisely as possible):
Start a (new) instance (master or worker) and set the following labels on the kubelet:
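Any `node-role.kubernetes.io/*` entry in `--node-labels` triggers it; for example (hypothetical role name, other flags omitted):

```sh
# On 1.15 this only logged a warning; on 1.16 the kubelet refuses
# to start with a restricted self-label:
kubelet --node-labels=node-role.kubernetes.io/master=""
```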
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`): 1.14.4
- OS (`cat /etc/os-release`): CentOS 7.7
- Kernel (`uname -a`): 5.3.1-1.el7.elrepo.x86_64