Commit

Merge pull request #48469 from jsturtevant/win-mem-cpu-feautre
Docs: Windows CPU and Memory Affinity
k8s-ci-robot authored Nov 25, 2024
2 parents fc73d7b + 7f411ed commit 64ac7c4
Showing 4 changed files with 62 additions and 3 deletions.
@@ -0,0 +1,17 @@
---
title: WindowsCPUAndMemoryAffinity
content_type: feature_gate

_build:
list: never
render: false

stages:
- stage: alpha
defaultValue: false
fromVersion: "1.32"
---

Add CPU and Memory Affinity support to Windows nodes with [CPUManager](/docs/tasks/administer-cluster/cpu-management-policies/#windows-support),
[MemoryManager](/docs/tasks/administer-cluster/memory-manager/#windows-support)
and the Topology Manager.
@@ -49,6 +49,14 @@
However, in workloads where CPU cache affinity and scheduling latency
significantly affect workload performance, the kubelet allows alternative CPU
management policies to determine some placement preferences on the node.

## Windows Support

{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}

CPU Manager support can be enabled on Windows by using the `WindowsCPUAndMemoryAffinity` feature gate;
it also requires support in the container runtime.
Once the feature gate is enabled, follow the steps below to configure the [CPU manager policy](#configuration).
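
A minimal sketch of how this could look in a kubelet configuration file on a Windows node; the field names are the standard `KubeletConfiguration` ones, and the CPU reservation value is only a placeholder:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Enable Windows CPU affinity support (alpha, off by default).
featureGates:
  WindowsCPUAndMemoryAffinity: true
# Select the static CPU manager policy.
cpuManagerPolicy: static
# The static policy also needs a non-zero CPU reservation; "1" here is a placeholder.
systemReserved:
  cpu: "1"
```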

### Configuration

The CPU Manager policy is set with the `--cpu-manager-policy` kubelet
33 changes: 30 additions & 3 deletions content/en/docs/tasks/administer-cluster/memory-manager.md
@@ -46,7 +46,7 @@
Preceding v1.22, the `kubelet` must be started with the following flag:

in order to enable the Memory Manager feature.

## How Memory Manager Operates?
## How does the Memory Manager Operate?

The Memory Manager currently offers the guaranteed memory (and hugepages) allocation
for Pods in Guaranteed QoS class.
@@ -57,7 +57,7 @@
prepare and deploy a `Guaranteed` pod as illustrated in the section

The Memory Manager is a Hint Provider, and it provides topology hints for
the Topology Manager which then aligns the requested resources according to these topology hints.
It also enforces `cgroups` (i.e. `cpuset.mems`) for pods.
On Linux, it also enforces `cgroups` (i.e. `cpuset.mems`) for pods.
The complete flow diagram concerning pod admission and deployment process is illustrated in
[Memory Manager KEP: Design Overview][4] and below:

@@ -91,6 +91,14 @@
The problem has been solved as elaborated in
Also, reference [Memory Manager KEP: Simulation - how the Memory Manager works? (by examples)][1]
illustrates how the management of groups occurs.

### Windows Support

{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}

Windows support can be enabled via the `WindowsCPUAndMemoryAffinity` feature gate;
it also requires support in the container runtime.
Only the [BestEffort policy](#policy-best-effort) is supported on Windows.
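
A minimal sketch of the corresponding kubelet configuration, assuming the policy value is spelled `BestEffort` as named on this page:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Enable Windows memory affinity support (alpha, off by default).
featureGates:
  WindowsCPUAndMemoryAffinity: true
# BestEffort is the only Memory Manager policy supported on Windows.
memoryManagerPolicy: BestEffort
```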

## Memory Manager configuration

Other Managers should be pre-configured first. Next, the Memory Manager feature should be enabled
@@ -103,7 +111,8 @@
node stability (section [Reserved memory flag](#reserved-memory-flag)).
Memory Manager supports two policies. You can select a policy via a `kubelet` flag `--memory-manager-policy`:

* `None` (default)
* `Static`
* `Static` (Linux only)
* `BestEffort` (Windows only)

#### None policy {#policy-none}

@@ -123,6 +132,24 @@
In the case of the `BestEffort` or `Burstable` pod, the `Static` Memory Manager
the default topology hint as there is no request for the guaranteed memory,
and does not reserve the memory in the internal [NodeMap][2] object.

This policy is only supported on Linux.

#### BestEffort policy {#policy-best-effort}

{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}

This policy is only supported on Windows.

On Windows, NUMA node assignment works differently than on Linux.
There is no mechanism to ensure that memory access only comes from a specific NUMA node.
Instead, the Windows scheduler selects the optimal NUMA node based on the CPU assignments.
Windows might still use other NUMA nodes if the scheduler deems that optimal.

The policy tracks the amount of memory available and requested through the internal [NodeMap][2].
The Memory Manager makes a best effort to ensure that enough memory is available on
a NUMA node before making the assignment.
This means that in most cases memory assignment should function as expected.
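
For illustration, a `Guaranteed` QoS pod (requests equal to limits) of the kind the Memory Manager accounts for; the pod name, image, and resource amounts below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: numa-demo                   # placeholder name
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    resources:
      # Requests equal limits for every resource, so the pod is in the
      # Guaranteed QoS class and receives topology hints; on Windows the
      # resulting memory placement remains best effort.
      requests:
        cpu: "2"
        memory: "4Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
```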

### Reserved memory flag

The [Node Allocatable](/docs/tasks/administer-cluster/reserve-compute-resources/) mechanism
7 changes: 7 additions & 0 deletions content/en/docs/tasks/administer-cluster/topology-manager.md
@@ -58,6 +58,13 @@
the pod can be accepted or rejected from the node based on the selected hint.
The hint is then stored in the Topology Manager for use by the *Hint Providers* when making the
resource allocation decisions.

## Windows Support

{{< feature-state feature_gate_name="WindowsCPUAndMemoryAffinity" >}}

Topology Manager support can be enabled on Windows by using the `WindowsCPUAndMemoryAffinity` feature gate;
it also requires support in the container runtime.
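
A minimal sketch of a kubelet configuration that enables the gate and picks one of the documented Topology Manager policies (`single-numa-node` below is just an example choice):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  WindowsCPUAndMemoryAffinity: true
# Any of the documented Topology Manager policies can be set here.
topologyManagerPolicy: single-numa-node
```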

## Topology manager scopes and policies

The Topology Manager currently:
