Kubelet label migration for NodeRestriction Admission Controller #84912

Closed
muffin87 opened this issue Nov 7, 2019 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

muffin87 commented Nov 7, 2019

Hello everyone,

I'm currently updating my Kubernetes clusters to version 1.15.x and I saw that Kubelet is reporting an error message I can no longer ignore.

What happened:
Kubelet is showing the following error message about the usage of restricted labels:

kubelet[32161]: W1107 11:39:40.737386   32161 options.go:251] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [kubernetes.io/role node-role.kubernetes.io/master]
kubelet[32161]: W1107 11:39:40.737487   32161 options.go:252] in 1.16, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)

I'm setting the labels with:

--node-labels=node-role.kubernetes.io/node=,kubernetes.io/role=node

I'm using these labels to:

  • Distinguish between a master node and a worker
  • Apply taints and selectors for different DaemonSets and Deployments
  • Display the role of the node in the kubectl get nodes command (see the sketch below).
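
For context, this is roughly how I consume these labels today (the selector below is just an illustration using the keys from the flag above):

kubectl get nodes -l kubernetes.io/role=node   # select the worker nodes by label
kubectl get nodes                              # the ROLES column is derived from node-role.kubernetes.io/<role> labels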

There is an announcement of that change in the Changelog for 1.13:

Use of the --node-labels flag to set labels under the kubernetes.io/ and k8s.io/ prefix will be subject to restriction by the NodeRestriction admission plugin in future releases.

I also found these entries in the documentation.

The next step for me was to find a way to migrate the labels while keeping the same behaviour.
When looking into the Kubelet source code, I found that there is actually no way to do that.

Seems like the labels are hardcoded: source

What's the plan here?

  • Ignore the message in Kubelet logs?
  • Disable the NodeRestriction admission controller?
  • Choose other labels (see the sketch below)?
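
If it comes to choosing other labels, one possible direction would be to move the self-assigned part of the flag into one of the allowed prefixes; the key below is only an illustrative sketch, not an established convention:

--node-labels=node.kubernetes.io/role=node

The node-role.kubernetes.io/node= label would then have to be applied from the API side (for example with kubectl) rather than via the kubelet flag.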

There are some references:

How to reproduce it (as minimally and precisely as possible):
Starting a (new) instance (master or worker) and setting the following labels on Kubelet:

--node-labels=node-role.kubernetes.io/node=,kubernetes.io/role=node

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.14.4
  • Cloud provider or hardware configuration: Bare Metal Servers
  • OS (e.g: cat /etc/os-release): CentOS 7.7
  • Kernel (e.g. uname -a): 5.3.1-1.el7.elrepo.x86_64
@muffin87 muffin87 added the kind/bug Categorizes issue or PR as related to a bug. label Nov 7, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Nov 7, 2019

muffin87 commented Nov 7, 2019

/sig node

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 7, 2019
@der-eismann

I really don't understand this... in 1.15 it was just a warning, but in 1.16 I can't even start kubelet with the kubernetes.io/role=master label. Changing it to node.kubernetes.io/role=master would be no big deal, but why can't kubectl show this label in the kubectl get nodes overview?

liggitt commented Nov 8, 2019

Nodes are not permitted to assert their own role labels. Node roles are typically used to identify privileged or control plane types of nodes, and allowing nodes to label themselves into that pool allows a compromised node to trivially attract workloads (like control plane daemonsets) that confer access to higher privilege credentials. See the design discussion in https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md for the rationale.

You can choose a different label to use for selectors (under the node.kubernetes.io/… namespace) if you want to keep using selectors that are vulnerable to node self-labeling.

You can apply the existing node role labels to node objects using kubectl or a controller (I think this is what kubeadm does).

kubectl get nodes can also display custom labels (see the --show-labels or --label-columns options).
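
For illustration of the last two points (the node name and role below are placeholders):

kubectl label node <node-name> node-role.kubernetes.io/worker=""   # role label applied via the API, which NodeRestriction does not block
kubectl get nodes --label-columns kubernetes.io/role               # show a custom label as an extra column
kubectl get nodes --show-labels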

/close

@k8s-ci-robot

@liggitt: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

ravisantoshgudimetla added a commit to ravisantoshgudimetla/windows-machine-config-operator that referenced this issue Nov 14, 2019
from 'node-role.kubernetes.io' to 'node.openshift.io/os_id=Windows'. The main problem is kubernetes/kubernetes#84912. There we're restricting the labels to be added in the kubernetes.io namespace. Now, I am changing it to a label that other entities could watch for (like future controllers) or query from (like WSU).
ravisantoshgudimetla added a commit to ravisantoshgudimetla/windows-machine-config-operator that referenced this issue Nov 16, 2019
from 'node-role.kubernetes.io' to 'node.openshift.io/os_id=Windows'. The main problem is kubernetes/kubernetes#84912. There we're restricting the labels to be added in the kubernetes.io namespace. Now, I am changing it to a label that other entities could watch for (like future controllers) or query from (like WSU). I am also removing the unit test because we're not getting the node labels after parsing the ignition file anymore.
ravisantoshgudimetla added a commit to ravisantoshgudimetla/windows-machine-config-operator that referenced this issue Nov 18, 2019
aravindhp pushed a commit to aravindhp/windows-machine-config-bootstrapper that referenced this issue Nov 19, 2019
@sshishov

Please add support for showing a custom role in the kubectl get nodes command. This is a commonly used command and a lot of people distinguish nodes based on its output; now that has become impossible.

gecube commented Sep 17, 2024

Good day!

I'd like to open this topic again. What I have found from my Kubernetes experience is that the most convenient label and/or taint is node-role.kubernetes.io with different suffixes like:
node-role.kubernetes.io/ingress: ""
node-role.kubernetes.io/api: ""
node-role.kubernetes.io/worker: ""
node-role.kubernetes.io/monitoring: ""
etc.

In particular, it is very nice that these roles are shown in the kubectl get nodes output without any extra effort!

So it would be very nice if the node could set this label/taint on itself, and if somebody worries about security, we could just restrict node-role.kubernetes.io/control-plane from being self-assigned. Just one more line of code... but the benefits would be much greater. I also see this question asked by many different people all across the globe, multiple times...
