Commit 06fd04d by smarterclayton, Jul 17, 2019 (parent 79119ce)

Describe appropriate use of node role labels and fixes that should be made

node-roles were not intended for internal use and this KEP clarifies both
their use and describes the process for resolving their internal use.

File added: keps/sig-architecture/2019-07-16-node-role-label-use.md (+223 lines)

---
title: Appropriate use of node-role labels
authors:
- "@smarterclayton"
owning-sig: sig-architecture
participating-sigs:
- sig-api-machinery
- sig-network
- sig-node
- sig-testing
reviewers:
- "@lavalamp"
- "@derekwaynecarr"
- "@liggitt"
approvers:
- "@thockin"
- "@derekwaynecarr"
creation-date: 2019-07-16
last-updated: 2019-07-16
status: implementable
---

# Appropriate use of node-role labels

## Table of Contents

<!-- toc -->
- [Release Signoff Checklist](#release-signoff-checklist)
- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Proposal](#proposal)
- [Use of <code>node-role.kubernetes.io/*</code> labels](#use-of--labels)
- [Migrating existing deployments](#migrating-existing-deployments)
- [Current users of <code>node-role.kubernetes.io/*</code> within the project that must change](#current-users-of--within-the-project-that-must-change)
- [Service load-balancer](#service-load-balancer)
- [Node controller excludes master nodes from consideration for eviction](#node-controller-excludes-master-nodes-from-consideration-for-eviction)
- [Kubernetes e2e tests](#kubernetes-e2e-tests)
- [Design Details](#design-details)
- [Test Plan](#test-plan)
- [Graduation Criteria](#graduation-criteria)
- [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
- [Version Skew Strategy](#version-skew-strategy)
- [Implementation History](#implementation-history)
- [Alternatives](#alternatives)
- [Reference](#reference)
<!-- /toc -->

## Release Signoff Checklist

**ACTION REQUIRED:** In order to merge code into a release, there must be an issue in [kubernetes/enhancements] referencing this KEP and targeting a release milestone **before [Enhancement Freeze](https://github.com/kubernetes/sig-release/tree/master/releases)
of the targeted release**.

These checklist items _must_ be updated for the enhancement to be released.

- [ ] kubernetes/enhancements issue in release milestone, which links to KEP: https://github.com/kubernetes/enhancements/issues/1143
- [ ] KEP approvers have set the KEP status to `implementable`
- [ ] Design details are appropriately documented
- [ ] Test plan is in place, giving consideration to SIG Architecture and SIG Testing input
- [ ] Graduation criteria is in place
- [ ] "Implementation History" section is up-to-date for milestone
- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
- [ ] Supporting documentation e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes

**Note:** Any PRs to move a KEP to `implementable` or significant changes once it is marked `implementable` should be approved by each of the KEP approvers. If any of those approvers is no longer appropriate, then changes to that list should be approved by the remaining approvers and/or the owning SIG (or SIG-arch for cross-cutting KEPs).

**Note:** This checklist is iterative and should be reviewed and updated every time this enhancement is being considered for a milestone.

[kubernetes.io]: https://kubernetes.io/
[kubernetes/enhancements]: https://github.com/kubernetes/enhancements/issues
[kubernetes/kubernetes]: https://github.com/kubernetes/kubernetes
[kubernetes/website]: https://github.com/kubernetes/website

## Summary

Clarify that the `node-role.kubernetes.io/*` labels are for use only by users and external projects and may not be used to vary
Kubernetes behavior. Define a migration process for all internal consumers of these labels.

## Motivation

The `node-role.kubernetes.io/master` (and the broader `node-role.kubernetes.io` namespace for labels) was introduced
to provide a simple organizational and grouping convention for cluster users. The labels were reserved solely for
organizing nodes via a convention that tools could recognize to display information to end users, and for use by
opinionated external tooling that wished to simplify conventions for end users. Use of the label by components
within the Kubernetes project (those projects subject to API review) was expressly forbidden. Specifically, no
project could mandate the use of those labels in a conformant distribution, since we anticipated that many deployments
of Kubernetes would have more nuanced control-plane topologies than simply "a control plane node".

Over time, several changes to Kubernetes core and related projects were introduced that depended on the
`node-role.kubernetes.io/master` label to vary their behavior in contravention to the guidance the label was
approved under. This was unintentional and due to unclear reviewer guidelines that have since been more strictly
enforced. Likewise, the complexity of Kubernetes deployments has increased and the simplistic mapping of control
plane concepts to a node has proven to limit the ability of conformant Kubernetes distributions to self-host, as
anticipated.


### Goals

This KEP:

* Clarifies that the use of the `node-role.kubernetes.io/*` label namespace is reserved solely for end-user and
external Kubernetes consumers, and:
* May not be used to vary behavior within Kubernetes projects that are subject to API review (kubernetes/kubernetes
and all components that expose APIs under the `*.k8s.io` namespace)
* Is not required to be used for a cluster to be conformant
* Describes the locations within Kubernetes that must be changed to use an alternative mechanism for behavior
* Suggests approaches for each
* Describes the timeframe and migration process for Kubernetes distributions and deployments to update labels


## Proposal

### Use of `node-role.kubernetes.io/*` labels

* Kubernetes components MUST NOT set or alter behavior on any label within the `node-role.kubernetes.io/*` namespace.
* Kubernetes components (such as `kubectl`) MAY simplify the display of `node-role.kubernetes.io/*` labels to convey the node roles of a node
* Kubernetes examples and documentation MUST NOT leverage the node-role labels for node placement
* External users, administrators, conformant Kubernetes distributions, and extensions MAY use `node-role.kubernetes.io/*` without reservation
* Extensions are recommended not to vary behavior based on node-role, but MAY do so as they wish
* First party components like `kubeadm` MAY use node-roles to simplify their own deployment mechanisms.
* Conformance tests MUST NOT depend on the node-role labels in any fashion


### Migrating existing deployments

The proposed fixes all require deployment-level changes. These changes must be staged across several releases, and it should be possible for
deployers to move early and address any issues caused by their topology.

Therefore, for each change we recommend the following process to adopt the new labels in successive releases:

* Release 1 (1.16):
* Introduce a feature gate that controls whether node-role labels are honored; the gate defaults to on: `NodeRoleBehavior=on`
* Define a new node label with an associated feature gate for each feature area; each gate defaults to off: `FeatureA=off`
* Behavior for each functional area is defined as `(NodeRoleBehavior == on && node_has_role) || (FeatureA == on && node_has_label)`
* Deprecation of all node role behavior is announced
* No new components may leverage node-roles within Kubernetes projects.
* Early adopters may label their nodes to opt in to the features, even in the absence of the gate.
* Release 2 (1.17):
* For each new node label, usage is reviewed and as appropriate the label is declared beta/GA and the feature gate is set on
* All Kubernetes deployments should be updated to add node labels as appropriate: `kubectl label nodes -l node-role.kubernetes.io/master LABEL_A=VALUE_A`
* Documentation will be provided on making the transition
* Deployments may set `NodeRoleBehavior=off` after they have set the appropriate labels.
* NOTE: Release 3 starts when all labels graduate to beta
* Release 3 (1.18):
* Default the `NodeRoleBehavior` gate to off. Admins whose deployments still use the old labels may explicitly set `NodeRoleBehavior=on` during 1.17 so the legacy behavior is preserved across the upgrade.
* Deployments should stop setting `NodeRoleBehavior=off` if they opted out early.
* Release 4 (1.19):
* The `NodeRoleBehavior` gate and all feature-level gates are removed; components that attempt to set these gates will fail to start.
* Code that references node-roles within Kubernetes will be removed.
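
The transitional predicate from Release 1 can be sketched in Go. The `Gates` struct and `useNode` function below mirror the KEP's placeholder names (`NodeRoleBehavior`, `FeatureA`) and are illustrative, not real Kubernetes feature-gate APIs:

```go
package main

import "fmt"

// Gates mirrors the two feature gates proposed above; field names follow
// the KEP's placeholders and are illustrative.
type Gates struct {
	NodeRoleBehavior bool // legacy: honor node-role labels
	FeatureA         bool // new: honor the feature-specific label
}

// useNode implements the transitional predicate:
// (NodeRoleBehavior == on && node_has_role) || (FeatureA == on && node_has_label)
func useNode(g Gates, hasRole, hasLabel bool) bool {
	return (g.NodeRoleBehavior && hasRole) || (g.FeatureA && hasLabel)
}

func main() {
	// Release 1 defaults: legacy behavior on, new label gate off.
	r1 := Gates{NodeRoleBehavior: true, FeatureA: false}
	fmt.Println(useNode(r1, true, false)) // true: node-role label still honored

	// Release 3 defaults: legacy behavior off, new label gate on.
	r3 := Gates{NodeRoleBehavior: false, FeatureA: true}
	fmt.Println(useNode(r3, true, false)) // false: node-role label no longer honored
}
```

Because the two clauses are independent, a node carrying both the old role label and the new feature label behaves identically at every stage of the rollout, which is what lets deployers migrate early.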

In Release 4 (which could be as early as 1.19) this KEP will be considered complete.


### Current users of `node-role.kubernetes.io/*` within the project that must change

The following components vary behavior based on the presence of the node-role labels:


#### Service load-balancer

The service load balancer implementation previously implemented a heuristic where `node-role.kubernetes.io/master` is used to exclude masters
from the candidate nodes for a service. This is an implementation detail of the cluster and is not allowed. Since there is value in excluding
nodes from service load balancer candidacy in some deployments, an alpha feature gated label `alpha.service-controller.kubernetes.io/exclude-balancer`
was added in Kubernetes 1.9.

This label should graduate as follows:

* 1.16: the label moves to beta under its final name, `service-controller.kubernetes.io/exclude-balancer`; the old alpha label is still honored
* 1.17: the `ServiceNodeExclusion` feature gate defaults to on; the old alpha label is removed
* 1.18: the `ServiceNodeExclusion` gate is declared GA
* 1.19: the gate is removed

Starting in 1.16, the legacy code path should be gated on `NodeRoleBehavior=on`.
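
A minimal sketch of the 1.16 transitional exclusion check, treating both labels as simple presence checks. This is illustrative only, not the service controller's actual implementation:

```go
package main

import "fmt"

// Label keys from the proposal: the alpha key is honored through 1.16 and
// dropped in 1.17, when the beta key becomes the sole trigger.
const (
	alphaExcludeBalancer = "alpha.service-controller.kubernetes.io/exclude-balancer"
	betaExcludeBalancer  = "service-controller.kubernetes.io/exclude-balancer"
)

// excludedFromBalancer reports whether a node should be excluded from
// service load-balancer candidacy. honorAlpha models the 1.16 transition
// and is dropped in 1.17.
func excludedFromBalancer(labels map[string]string, honorAlpha bool) bool {
	if _, ok := labels[betaExcludeBalancer]; ok {
		return true
	}
	if honorAlpha {
		_, ok := labels[alphaExcludeBalancer]
		return ok
	}
	return false
}

func main() {
	labels := map[string]string{alphaExcludeBalancer: "true"}
	fmt.Println(excludedFromBalancer(labels, true))  // true: 1.16 still honors the alpha label
	fmt.Println(excludedFromBalancer(labels, false)) // false: 1.17 ignores the alpha label
}
```

Deployers who relabel their masters with the beta key during 1.16 see no behavior change when the alpha key stops being honored.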


#### Node controller excludes master nodes from consideration for eviction

The `k8s.io/kubernetes/pkg/util/system/IsMasterNode(nodeName)` function is used by the NodeLifecycleController to exclude nodes with a node name
that ends in `master` or starts with `master-` when considering whether to mark nodes as disrupted. A recent PR attempted to change this to use node-roles and was blocked. Instead, the controller should be updated to use a label `node-controller.kubernetes.io/exclude-network-check` to decide whether to exclude nodes from being considered for disruption handling.
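
The contrast between the name heuristic and the proposed label can be sketched as follows. `isMasterNode` follows the KEP's description of the current behavior; `excludeFromDisruption` is a hypothetical helper for the proposed label check:

```go
package main

import (
	"fmt"
	"strings"
)

// isMasterNode reproduces the name heuristic described above for
// k8s.io/kubernetes/pkg/util/system.IsMasterNode: it matches on the node
// name alone, which only works for clusters following a naming convention.
func isMasterNode(nodeName string) bool {
	return strings.HasSuffix(nodeName, "master") || strings.HasPrefix(nodeName, "master-")
}

// excludeFromDisruption sketches the proposed replacement: an explicit
// opt-out label rather than a naming convention. Illustrative only.
func excludeFromDisruption(labels map[string]string) bool {
	_, ok := labels["node-controller.kubernetes.io/exclude-network-check"]
	return ok
}

func main() {
	// The heuristic misses control-plane nodes that don't embed "master" in
	// their name, e.g. cloud-assigned names.
	fmt.Println(isMasterNode("ip-10-0-0-1")) // false

	// The explicit label works regardless of node name.
	fmt.Println(excludeFromDisruption(map[string]string{
		"node-controller.kubernetes.io/exclude-network-check": "",
	})) // true
}
```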


#### Kubernetes e2e tests

The e2e tests use a number of heuristics, including the `IsMasterNode(nodeName)` function and the node-role labels, to select nodes. In order for conformant Kubernetes clusters to run the tests, the e2e suite must change to use individual user-provided label selectors to identify nodes to test, nodes that have special rules for testing unusual cases, and nodes subject to other selection behaviors. The label selectors may be defaulted by the test code to their current values, as long as a conformant cluster operator can execute the e2e suite against an arbitrary cluster.

QUESTION: Is a single label selector sufficient to identify nodes to test?
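
One way the user-provided selector could be plumbed through, as a sketch. The function name and the default selector value shown here are hypothetical, standing in for whatever the current test-code defaults turn out to be:

```go
package main

import "fmt"

// testNodeSelector returns the label selector the e2e suite uses to pick
// nodes to test: the operator-supplied value if given, otherwise a default
// preserving current behavior. Both names and the default are illustrative.
func testNodeSelector(userProvided string) string {
	if userProvided != "" {
		return userProvided
	}
	// Hypothetical default: exclude nodes labeled as masters, matching the
	// selection the suite performs today via heuristics.
	return "!node-role.kubernetes.io/master"
}

func main() {
	fmt.Println(testNodeSelector(""))              // suite default
	fmt.Println(testNodeSelector("e2e-node=true")) // operator override
}
```

With the override in place, a conformant cluster with an unusual control-plane topology simply supplies its own selector instead of renaming or relabeling nodes.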


## Design Details

### Test Plan

* Unit tests to verify selection using feature gates

### Graduation Criteria

* New labels and feature gates become beta after one release, become GA and default on after two, and are removed in the fourth release.
* Documentation for migrating to the new labels is available in 1.17.

### Upgrade / Downgrade Strategy

As described in the migration process, deployers and administrators have two releases to migrate their clusters.

### Version Skew Strategy

Controllers are updated after the control plane, so consumers must update the labels on their nodes before they update controller
processes in 1.18.

## Implementation History

- 2019-07-16: Created

## Alternatives

Allowing core components to use node-role labels was considered and rejected because:

1. The number of impacted components and clusters is anticipated to be small
2. There are four releases in which to update
3. Some deployment topologies (self-hosted clusters) cannot use service load balancers for control plane components today
4. We wish to prevent additional confusion by providing clear guidance


## Reference

* https://github.com/kubernetes/kubernetes/pull/35975
* https://github.com/kubernetes/kubernetes/pull/39112
* https://github.com/kubernetes/kubernetes/pull/76654
* https://github.com/kubernetes/kubernetes/pull/80021
