diff --git a/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md b/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md index 92120605a22ae..c17453a6def84 100644 --- a/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md +++ b/content/en/blog/_posts/2022-05-23-service-ip-dynamic-and-static-allocation.md @@ -86,7 +86,7 @@ Examples: #### Service IP CIDR block: 10.96.0.0/24 Range Size: 2<sup>8</sup> - 2 = 254 -Band Offset: `min(max(16,256/16),256)` = `min(16,256)` = 16 +Band Offset: `min(max(16, 256/16), 256)` = `min(16, 256)` = 16 Static band start: 10.96.0.1 Static band end: 10.96.0.16 Range end: 10.96.0.254 @@ -101,7 +101,7 @@ pie showData #### Service IP CIDR block: 10.96.0.0/20 Range Size: 2<sup>12</sup> - 2 = 4094 -Band Offset: `min(max(16,256/16),256)` = `min(256,256)` = 256 +Band Offset: `min(max(16, 4096/16), 256)` = `min(256, 256)` = 256 Static band start: 10.96.0.1 Static band end: 10.96.1.0 Range end: 10.96.15.254 @@ -116,7 +116,7 @@ pie showData #### Service IP CIDR block: 10.96.0.0/16 Range Size: 2<sup>16</sup> - 2 = 65534 -Band Offset: `min(max(16,65536/16),256)` = `min(4096,256)` = 256 +Band Offset: `min(max(16, 65536/16), 256)` = `min(4096, 256)` = 256 Static band start: 10.96.0.1 Static band ends: 10.96.1.0 Range end: 10.96.255.254 diff --git a/content/en/docs/concepts/architecture/control-plane-node-communication.md b/content/en/docs/concepts/architecture/control-plane-node-communication.md index a4814aab4b45e..e2004f146c18b 100644 --- a/content/en/docs/concepts/architecture/control-plane-node-communication.md +++ b/content/en/docs/concepts/architecture/control-plane-node-communication.md @@ -21,7 +21,7 @@ This document catalogs the communication paths between the control plane (apiser Kubernetes has a "hub-and-spoke" API pattern. All API usage from nodes (or the pods they run) terminates at the apiserver. None of the other control plane components are designed to expose remote services. 
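The band-offset corrections in the blog-post hunks above all apply the same formula, `min(max(16, blockSize/16), 256)`, to the size of the Service CIDR block. A minimal sketch of that arithmetic (an illustrative reimplementation for checking the examples, not the allocator's Go source):

```python
# Illustrative sketch of the static-band offset described in the hunks above;
# this is not the actual Kubernetes allocator implementation.

def band_offset(prefix_len: int) -> int:
    """Return the size of the statically reserved band for an IPv4 Service CIDR."""
    block = 2 ** (32 - prefix_len)         # total addresses in the CIDR block
    return min(max(16, block // 16), 256)  # clamp between 16 and 256 addresses

# The three examples from the post: /24 -> 16, /20 -> 256, /16 -> 256
for plen in (24, 20, 16):
    print(f"/{plen}: band offset {band_offset(plen)}")
```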
The apiserver is configured to listen for remote connections on a secure HTTPS port (typically 443) with one or more forms of client [authentication](/docs/reference/access-authn-authz/authentication/) enabled. One or more forms of [authorization](/docs/reference/access-authn-authz/authorization/) should be enabled, especially if [anonymous requests](/docs/reference/access-authn-authz/authentication/#anonymous-requests) or [service account tokens](/docs/reference/access-authn-authz/authentication/#service-account-tokens) are allowed. -Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. +Nodes should be provisioned with the public root certificate for the cluster such that they can connect securely to the apiserver along with valid client credentials. A good approach is that the client credentials provided to the kubelet are in the form of a client certificate. See [kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for automated provisioning of kubelet client certificates. Pods that wish to connect to the apiserver can do so securely by leveraging a service account so that Kubernetes will automatically inject the public root certificate and a valid bearer token into the pod when it is instantiated. The `kubernetes` service (in `default` namespace) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint on the apiserver. 
@@ -49,7 +49,7 @@ To verify this connection, use the `--kubelet-certificate-authority` flag to pro If that is not possible, use [SSH tunneling](#ssh-tunnels) between the apiserver and kubelet if required to avoid connecting over an untrusted or public network. -Finally, [Kubelet authentication and/or authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) should be enabled to secure the kubelet API. +Finally, [Kubelet authentication and/or authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/) should be enabled to secure the kubelet API. ### apiserver to nodes, pods, and services diff --git a/content/en/docs/concepts/cluster-administration/_index.md b/content/en/docs/concepts/cluster-administration/_index.md index 7e5827a6f7a2c..ace5297b330cf 100644 --- a/content/en/docs/concepts/cluster-administration/_index.md +++ b/content/en/docs/concepts/cluster-administration/_index.md @@ -63,8 +63,8 @@ Before choosing a guide, here are some considerations: ### Securing the kubelet * [Control Plane-Node communication](/docs/concepts/architecture/control-plane-node-communication/) - * [TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) - * [Kubelet authentication/authorization](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) + * [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) + * [Kubelet authentication/authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/) ## Optional Cluster Services diff --git a/content/en/docs/concepts/cluster-administration/addons.md b/content/en/docs/concepts/cluster-administration/addons.md index 3e9cd316435f6..d6ef22e847a71 100644 --- a/content/en/docs/concepts/cluster-administration/addons.md +++ b/content/en/docs/concepts/cluster-administration/addons.md @@ -21,6 +21,7 @@ This page lists some of the available add-ons and links to their respective inst * 
[Canal](https://github.com/tigera/canal/tree/master/k8s-install) unites Flannel and Calico, providing networking and network policy. * [Cilium](https://github.com/cilium/cilium) is a L3 network and network policy plugin that can enforce HTTP/API/L7 policies transparently. Both routing and overlay/encapsulation mode are supported, and it can work on top of other CNI plugins. * [CNI-Genie](https://github.com/Huawei-PaaS/CNI-Genie) enables Kubernetes to seamlessly connect to a choice of CNI plugins, such as Calico, Canal, Flannel, or Weave. +* [Contiv](https://contivpp.io/) provides configurable networking (native L3 using BGP, overlay using vxlan, classic L2, and Cisco-SDN/ACI) for various use cases and a rich policy framework. Contiv project is fully [open sourced](https://github.com/contiv). The [installer](https://github.com/contiv/install) provides both kubeadm and non-kubeadm based installation options. * [Contrail](https://www.juniper.net/us/en/products-services/sdn/contrail/contrail-networking/), based on [Tungsten Fabric](https://tungsten.io), is an open source, multi-cloud network virtualization and policy management platform. Contrail and Tungsten Fabric are integrated with orchestration systems such as Kubernetes, OpenShift, OpenStack and Mesos, and provide isolation modes for virtual machines, containers/pods and bare metal workloads. * [Flannel](https://github.com/flannel-io/flannel#deploying-flannel-manually) is an overlay network provider that can be used with Kubernetes. * [Knitter](https://github.com/ZTE/Knitter/) is a plugin to support multiple network interfaces in a Kubernetes pod. diff --git a/content/en/docs/concepts/policy/resource-quotas.md b/content/en/docs/concepts/policy/resource-quotas.md index 0e88ca34335a0..58f0d9e465a48 100644 --- a/content/en/docs/concepts/policy/resource-quotas.md +++ b/content/en/docs/concepts/policy/resource-quotas.md @@ -22,8 +22,7 @@ be consumed by resources in that namespace. 
Resource quotas work like this: -- Different teams work in different namespaces. Currently this is voluntary, but - support for making this mandatory via ACLs is planned. +- Different teams work in different namespaces. This can be enforced with [RBAC](/docs/reference/access-authn-authz/rbac/). - The administrator creates one ResourceQuota for each namespace. diff --git a/content/en/docs/concepts/storage/persistent-volumes.md b/content/en/docs/concepts/storage/persistent-volumes.md index 074fe9f75903e..ccbbccf24276c 100644 --- a/content/en/docs/concepts/storage/persistent-volumes.md +++ b/content/en/docs/concepts/storage/persistent-volumes.md @@ -1021,7 +1021,7 @@ and need persistent storage, it is recommended that you use the following patter * Learn more about [Creating a PersistentVolume](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume). * Learn more about [Creating a PersistentVolumeClaim](/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolumeclaim). -* Read the [Persistent Storage design document](https://git.k8s.io/community/contributors/design-proposals/storage/persistent-storage.md). +* Read the [Persistent Storage design document](https://github.com/kubernetes/design-proposals-archive/blob/main/storage/persistent-storage.md). ### API references {#reference} diff --git a/content/en/docs/concepts/workloads/controllers/replicaset.md b/content/en/docs/concepts/workloads/controllers/replicaset.md index 29df406dcbe26..470a5e5024150 100644 --- a/content/en/docs/concepts/workloads/controllers/replicaset.md +++ b/content/en/docs/concepts/workloads/controllers/replicaset.md @@ -387,7 +387,7 @@ As such, it is recommended to use Deployments when you want ReplicaSets. 
### Bare Pods -Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node (for example, Kubelet or Docker). +Unlike the case where a user directly created Pods, a ReplicaSet replaces Pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicaSet even if your application requires only a single Pod. Think of it similarly to a process supervisor, only it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to some agent on the node such as Kubelet. 
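The single-Pod recommendation above can be sketched as a minimal manifest; the name, labels, and image here are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: single-pod-app        # illustrative name
spec:
  replicas: 1                 # even a single Pod benefits from supervision
  selector:
    matchLabels:
      app: single-pod-app
  template:
    metadata:
      labels:
        app: single-pod-app   # must match spec.selector.matchLabels
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9   # placeholder image
```

If the node running the Pod fails, the ReplicaSet controller creates a replacement Pod on another node, which a bare Pod would not get.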
### Job diff --git a/content/en/docs/reference/access-authn-authz/_index.md b/content/en/docs/reference/access-authn-authz/_index.md index 86d06488a8742..3677f79c57149 100644 --- a/content/en/docs/reference/access-authn-authz/_index.md +++ b/content/en/docs/reference/access-authn-authz/_index.md @@ -24,3 +24,5 @@ Reference documentation: - Service accounts - [Developer guide](/docs/tasks/configure-pod-container/configure-service-account/) - [Administration](/docs/reference/access-authn-authz/service-accounts-admin/) +- [Kubelet Authentication & Authorization](/docs/reference/access-authn-authz/kubelet-authn-authz/) + - including kubelet [TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) diff --git a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md index b8a7faa946a7f..0d55d966a68f7 100644 --- a/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md +++ b/content/en/docs/reference/access-authn-authz/bootstrap-tokens.md @@ -15,7 +15,7 @@ creating new clusters or joining new nodes to an existing cluster. It was built to support [kubeadm](/docs/reference/setup-tools/kubeadm/), but can be used in other contexts for users that wish to start clusters without `kubeadm`. It is also built to work, via RBAC policy, with the -[Kubelet TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) system. +[Kubelet TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) system. 
diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md b/content/en/docs/reference/access-authn-authz/kubelet-authn-authz.md similarity index 100% rename from content/en/docs/reference/command-line-tools-reference/kubelet-authentication-authorization.md rename to content/en/docs/reference/access-authn-authz/kubelet-authn-authz.md diff --git a/content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md b/content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md similarity index 100% rename from content/en/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping.md rename to content/en/docs/reference/access-authn-authz/kubelet-tls-bootstrapping.md diff --git a/content/en/docs/reference/access-authn-authz/node.md b/content/en/docs/reference/access-authn-authz/node.md index 6e7c538eb01c7..bc9863219f7d5 100644 --- a/content/en/docs/reference/access-authn-authz/node.md +++ b/content/en/docs/reference/access-authn-authz/node.md @@ -43,7 +43,7 @@ have the minimal set of permissions required to operate correctly. In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. This group and user name format match the identity created for each kubelet as part of -[kubelet TLS bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/). +[kubelet TLS bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/). The value of `<nodeName>` **must** match precisely the name of the node as registered by the kubelet. By default, this is the host name as provided by `hostname`, or overridden via the [kubelet option](/docs/reference/command-line-tools-reference/kubelet/) `--hostname-override`. 
However, when using the `--cloud-provider` kubelet option, the specific hostname may be determined by the cloud provider, ignoring the local `hostname` and the `--hostname-override` option. For specifics about how the kubelet determines the hostname, see the [kubelet options reference](/docs/reference/command-line-tools-reference/kubelet/). diff --git a/content/en/docs/reference/access-authn-authz/rbac.md b/content/en/docs/reference/access-authn-authz/rbac.md index 57a074a29a47a..d085251e4337f 100644 --- a/content/en/docs/reference/access-authn-authz/rbac.md +++ b/content/en/docs/reference/access-authn-authz/rbac.md @@ -798,7 +798,7 @@ This is commonly used by add-on API servers for unified authentication and autho system:node-bootstrapper None Allows access to the resources required to perform -<a href="/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/">kubelet TLS bootstrapping</a>. +<a href="/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/">kubelet TLS bootstrapping</a>. system:node-problem-detector diff --git a/content/en/docs/reference/command-line-tools-reference/feature-gates.md b/content/en/docs/reference/command-line-tools-reference/feature-gates.md index 8eb23b3a473eb..549e3f248109d 100644 --- a/content/en/docs/reference/command-line-tools-reference/feature-gates.md +++ b/content/en/docs/reference/command-line-tools-reference/feature-gates.md @@ -1086,10 +1086,10 @@ Each feature gate is designed for enabling/disabling a specific feature: [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/blob/master/keps/sig-auth/1205-bound-service-account-tokens/README.md) for more details. - `RotateKubeletClientCertificate`: Enable the rotation of the client TLS certificate on the kubelet. - See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) + See [kubelet configuration](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration) for more details. - `RotateKubeletServerCertificate`: Enable the rotation of the server TLS certificate on the kubelet. 
- See [kubelet configuration](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration) + See [kubelet configuration](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration) for more details. - `RunAsGroup`: Enable control over the primary group ID set on the init processes of containers. diff --git a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md index 74428b914834e..0f4d373dbc1ae 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md +++ b/content/en/docs/reference/setup-tools/kubeadm/implementation-details.md @@ -219,7 +219,7 @@ Other API server flags that are set unconditionally are: - `--insecure-port=0` to avoid insecure connections to the api server - `--enable-bootstrap-token-auth=true` to enable the `BootstrapTokenAuthenticator` authentication module. - See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details + See [TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for more details - `--allow-privileged` to `true` (required e.g. by kube proxy) - `--requestheader-client-ca-file` to `front-proxy-ca.crt` - `--enable-admission-plugins` to: @@ -266,7 +266,7 @@ The static Pod manifest for the controller manager is affected by following para Other flags that are set unconditionally are: - `--controllers` enabling all the default controllers plus `BootstrapSigner` and `TokenCleaner` controllers for TLS bootstrap. 
- See [TLS Bootstrapping](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) for more details + See [TLS Bootstrapping](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) for more details - `--use-service-account-credentials` to `true` - Flags for using certificates generated in previous steps: - `--root-ca-file` to `ca.crt` diff --git a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md index fc87e796c2ed7..2abccfd8b63da 100644 --- a/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md +++ b/content/en/docs/reference/setup-tools/kubeadm/kubeadm-init.md @@ -52,7 +52,7 @@ following steps: 1. Makes all the necessary configurations for allowing node joining with the [Bootstrap Tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and - [TLS Bootstrap](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/) + [TLS Bootstrap](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/) mechanism: - Write a ConfigMap for making available all the information required diff --git a/content/en/docs/setup/best-practices/certificates.md b/content/en/docs/setup/best-practices/certificates.md index 6d6d576c39641..23e4ac8df70b4 100644 --- a/content/en/docs/setup/best-practices/certificates.md +++ b/content/en/docs/setup/best-practices/certificates.md @@ -22,7 +22,7 @@ This page explains the certificates that your cluster requires. 
Kubernetes requires PKI for the following operations: * Client certificates for the kubelet to authenticate to the API server -* Kubelet [server certificates](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#client-and-serving-certificates) +* Kubelet [server certificates](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#client-and-serving-certificates) for the API server to talk to the kubelets * Server certificate for the API server endpoint * Client certificates for administrators of the cluster to authenticate to the API server diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md index d9df7fb38c31b..97fa2bc352b36 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver.md @@ -68,9 +68,7 @@ and passing it to the local node kubelet. ## Using the `cgroupfs` driver -As this guide explains using the `cgroupfs` driver with kubeadm is not recommended. - -To continue using `cgroupfs` and to prevent `kubeadm upgrade` from modifying the +To use `cgroupfs` and to prevent `kubeadm upgrade` from modifying the `KubeletConfiguration` cgroup driver on existing setups, you must be explicit about its value. This applies to a case where you do not wish future versions of kubeadm to apply the `systemd` driver by default. diff --git a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md index 696a69ba828c8..f846f3c32d2b8 100644 --- a/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md +++ b/content/en/docs/tasks/administer-cluster/kubeadm/kubeadm-certs.md @@ -276,7 +276,7 @@ By default, these serving certificate will expire after one year. 
Kubeadm sets t `KubeletConfiguration` field `rotateCertificates` to `true`, which means that close to expiration a new set of CSRs for the serving certificates will be created and must be approved to complete the rotation. To understand more see -[Certificate Rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation). +[Certificate Rotation](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation). If you are looking for a solution for automatic approval of these CSRs it is recommended that you contact your cloud provider and ask if they have a CSR signer that verifies diff --git a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md index f125fd6cf2042..2be416ceb6a21 100644 --- a/content/en/docs/tasks/administer-cluster/securing-a-cluster.md +++ b/content/en/docs/tasks/administer-cluster/securing-a-cluster.md @@ -13,15 +13,10 @@ content_type: task This document covers topics related to protecting a cluster from accidental or malicious access and provides recommendations on overall security. - - ## {{% heading "prerequisites" %}} - * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}} - - ## Controlling access to the Kubernetes API @@ -77,11 +72,13 @@ Consult the [authorization reference section](/docs/reference/access-authn-authz ## Controlling access to the Kubelet -Kubelets expose HTTPS endpoints which grant powerful control over the node and containers. By default Kubelets allow unauthenticated access to this API. +Kubelets expose HTTPS endpoints which grant powerful control over the node and containers. +By default Kubelets allow unauthenticated access to this API. Production clusters should enable Kubelet authentication and authorization. -Consult the [Kubelet authentication/authorization reference](/docs/reference/command-line-tools-reference/kubelet-authentication-authorization) for more information. 
+Consult the [Kubelet authentication/authorization reference](/docs/reference/access-authn-authz/kubelet-authn-authz/) +for more information. ## Controlling the capabilities of a workload or user at runtime diff --git a/content/en/docs/tasks/debug/debug-application/debug-pods.md b/content/en/docs/tasks/debug/debug-application/debug-pods.md index 77fff23a525aa..88a3be9beb4e2 100644 --- a/content/en/docs/tasks/debug/debug-application/debug-pods.md +++ b/content/en/docs/tasks/debug/debug-application/debug-pods.md @@ -156,5 +156,5 @@ to make sure that your `Service` is running, has `Endpoints`, and your `Pods` ar actually serving; you have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving. -You may also visit [troubleshooting document](/docs/tasks/debug/overview/) for more information. +You may also visit [troubleshooting document](/docs/tasks/debug/) for more information. diff --git a/content/en/docs/tasks/tls/certificate-rotation.md b/content/en/docs/tasks/tls/certificate-rotation.md index 2db0c1255daf9..8d1992845cdfa 100644 --- a/content/en/docs/tasks/tls/certificate-rotation.md +++ b/content/en/docs/tasks/tls/certificate-rotation.md @@ -28,7 +28,7 @@ default, these certificates are issued with one year expiration so that they do not need to be renewed too frequently. Kubernetes contains [kubelet certificate -rotation](/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/), +rotation](/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/), that will automatically generate a new key and request a new certificate from the Kubernetes API as the current certificate approaches expiration. 
Once the new certificate is available, it will be used for authenticating connections to diff --git a/content/zh/docs/concepts/security/rbac-good-practices.md b/content/zh/docs/concepts/security/rbac-good-practices.md new file mode 100644 index 0000000000000..07f54b9b326ce --- /dev/null +++ b/content/zh/docs/concepts/security/rbac-good-practices.md @@ -0,0 +1,355 @@ +--- +title: 基于角色的访问控制良好实践 +description: > + 为集群操作人员提供的良好的 RBAC 设计原则和实践。 +content_type: concept +--- + + + + + + + +Kubernetes {{< glossary_tooltip text="RBAC" term_id="rbac" >}} +是一项重要的安全控制措施,用于保证集群用户和工作负载只能访问履行自身角色所需的资源。 +在为集群用户设计权限时,请务必确保集群管理员知道可能发生特权提级的地方, +降低因过多权限而导致安全事件的风险。 + +此文档的良好实践应该与通用 +[RBAC 文档](/zh/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update)一起阅读。 + + + + +## 通用的良好实践 {#general-good-practice} + +### 最小特权 {#least-privilege} + + +理想情况下,分配给用户和服务帐户的 RBAC 权限应该是最小的。 +仅应使用操作明确需要的权限,虽然每个集群会有所不同,但可以应用的一些常规规则: + + +- 尽可能在命名空间级别分配权限。授予用户在特定命名空间中的权限时使用 RoleBinding + 而不是 ClusterRoleBinding。 +- 尽可能避免通过通配符设置权限,尤其是对所有资源的权限。 + 由于 Kubernetes 是一个可扩展的系统,因此通过通配符来授予访问权限不仅会授予集群中当前的所有对象类型, + 还包含所有未来被创建的所有对象类型。 +- 管理员不应使用 `cluster-admin` 账号,除非特别需要。为低特权帐户提供 + [伪装权限](/zh/docs/reference/access-authn-authz/authentication/#user-impersonation) + 可以避免意外修改集群资源。 +- 避免将用户添加到 `system:masters` 组。任何属于此组成员的用户都会绕过所有 RBAC 权限检查, + 始终具有不受限制的超级用户访问权限,并且不能通过删除 `RoleBinding` 或 `ClusterRoleBinding` + 来取消其权限。顺便说一句,如果集群是使用 Webhook 鉴权,此组的成员身份也会绕过该 + Webhook(来自属于该组成员的用户的请求永远不会发送到 Webhook)。 + + +### 最大限度地减少特权令牌的分发 {#minimize-distribution-of-privileged-tokens} + + +理想情况下,不应为 Pod 分配具有强大权限(例如,在[特权提级的风险](#privilege-escalation-risks)中列出的任一权限)的服务帐户。 +如果工作负载需要比较大的权限,请考虑以下做法: +- 限制运行此类 Pod 的节点数量。确保你运行的任何 DaemonSet 都是必需的, + 并且以最小权限运行,以限制容器逃逸的影响范围。 +- 避免将此类 Pod 与不可信任或公开的 Pod 在一起运行。 + 考虑使用[污点和容忍度](/zh/docs/concepts/scheduling-eviction/taint-and-toleration/)、 + [节点亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity)或 + [Pod 
反亲和性](/zh/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)确保 + Pod 不会与不可信或不太受信任的 Pod 一起运行。 + 特别注意可信度不高的 Pod 不符合 **Restricted** Pod 安全标准的情况。 + +### 加固 {#hardening} + +Kubernetes 默认提供访问权限并非是每个集群都需要的。 +审查默认提供的 RBAC 权限为安全加固提供了机会。 +一般来说,不应该更改 `system:` 帐户的某些权限,有一些方式来强化现有集群的权限: + + +- 审查 `system:unauthenticated` 组的绑定,并在可能的情况下将其删除, + 因为这会给所有能够访问 API 服务器的人以网络级别的权限。 +- 通过设置 `automountServiceAccountToken: false` 来避免服务账号令牌的默认自动挂载, + 有关更多详细信息,请参阅[使用默认服务账号令牌](/zh/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server)。 + 此参数可覆盖 Pod 服务账号设置,而需要服务账号令牌的工作负载仍可以挂载。 + + +### 定期检查 {#periodic-review} +定期检查 Kubernetes RBAC 设置是否有冗余条目和提权可能性是至关重要的。 +如果攻击者能够创建与已删除用户同名的用户账号, +他们可以自动继承被删除用户的所有权限,尤其是分配给该用户的权限。 + + +## Kubernetes RBAC - 权限提权的风险 {#privilege-escalation-risks} + +在 Kubernetes RBAC 中有许多特权,如果被授予, +用户或服务帐户可以提升其在集群中的权限并可能影响集群外的系统。 + +本节旨在提醒集群操作员需要注意的不同领域, +以确保他们不会无意中授予超出预期的集群访问权限。 + + +### 列举 Secret {#listing-secrets} + +大家都很清楚,若允许对 Secrets 执行 `get` 访问,用户就获得了访问 Secret 内容的能力。 +同样需要注意的是:`list` 和 `watch` 访问也会授权用户获取 Secret 的内容。 +例如,当返回 List 响应时(例如,通过 +`kubectl get secrets -A -o yaml`),响应包含所有 Secret 的内容。 + + +### 工作负载的创建 {#workload-creation} + +能够创建工作负载的用户(Pod 或管理 Pod 的[工作负载资源](/zh/docs/concepts/workloads/controllers/)) +能够访问下层的节点,除非基于 Kubernetes 的 +[Pod 安全标准](/zh/docs/concepts/security/pod-security-standards/)做限制。 + + +可以运行特权 Pod 的用户可以利用该访问权限获得节点访问权限, +并可能进一步提升他们的特权。如果你不完全信任某用户或其他主体, +不相信他们能够创建比较安全且相互隔离的 Pod,你应该强制实施 **Baseline** +或 **Restricted** Pod 安全标准。 +你可以使用 [Pod 安全性准入](/zh/docs/concepts/security/pod-security-admission/)或其他(第三方)机制来强制实施这些限制。 + + +你还可以使用已弃用的 [PodSecurityPolicy](/zh/docs/concepts/policy/pod-security-policy/) +机制以限制用户创建特权 Pod 的能力 (特别注意:PodSecurityPolicy 已计划在版本 1.25 中删除)。 + + +在命名空间中创建工作负载还会授予对该命名空间中 Secret 的间接访问权限。 +在 kube-system 或类似特权的命名空间中创建 Pod +可以授予用户不需要通过 RBAC 即可获取 Secret 访问权限。 + + +### 持久卷的创建 {#persistent-volume-creation} + +如 
[PodSecurityPolicy](/zh/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems) +文档中所述,创建 PersistentVolumes 的权限可以提权访问底层主机。 +如果需要访问 PersistentVolume,受信任的管理员应该创建 `PersistentVolume`, +受约束的用户应该使用 `PersistentVolumeClaim` 访问该存储。 + + +### 访问 Node 的 `proxy` 子资源 {#access-to-proxy-subresource-of-nodes} + +有权访问 Node 对象的 proxy 子资源的用户有权访问 Kubelet API, +这允许在他们有权访问的节点上的所有 Pod 上执行命令。 +此访问绕过审计日志记录和准入控制,因此在授予对此资源的权限前应小心。 + + +### esclate 动词 {#escalate-verb} +通常,RBAC 系统会阻止用户创建比他所拥有的更多权限的 `ClusterRole`。 +而 `escalate` 动词是个例外。如 +[RBAC 文档](/zh/docs/reference/access-authn-authz/rbac/#restrictions-on-role-creation-or-update) +中所述,拥有此权限的用户可以有效地提升他们的权限。 + + +### bind 动词 {#bind-verb} + +与 `escalate` 动作类似,授予此权限的用户可以绕过 Kubernetes +对权限提升的内置保护,用户可以创建并绑定尚不具有的权限的角色。 + + +### impersonate 动词 {#impersonate-verb} + +此动词允许用户伪装并获得集群中其他用户的权限。 +授予它时应小心,以确保通过其中一个伪装账号不会获得过多的权限。 + + +### CSR 和证书颁发 {#csrs-and-certificate-issuing} + +CSR API 允许用户拥有 `create` CSR 的权限和 `update` +`certificatesigningrequests/approval` 的权限, +其中签名者是 `kubernetes.io/kube-apiserver-client`, +通过此签名创建的客户端证书允许用户向集群进行身份验证。 +这些客户端证书可以包含任意的名称,包括 Kubernetes 系统组件的副本。 +这将有利于特权提级。 + + +### 令牌请求 {#token-request} + +拥有 `serviceaccounts/token` 的 `create` 权限的用户可以创建 +TokenRequest 来发布现有服务帐户的令牌。 + + +### 控制准入 Webhook {#control-admission-webhooks} + +可以控制 `validatingwebhookconfigurations` 或 `mutatingwebhookconfigurations` +的用户可以控制能读取任何允许进入集群的对象的 webhook, +并且在有变更 webhook 的情况下,还可以变更准入的对象。 + + +## Kubernetes RBAC - 拒绝服务攻击的风险 {#denial-of-service-risks} + +### 对象创建拒绝服务 {#object-creation-dos} +有权在集群中创建对象的用户根据创建对象的大小和数量可能会创建足够大的对象, +产生拒绝服务状况,如 [Kubernetes 使用的 etcd 容易受到 OOM 攻击](https://github.com/kubernetes/kubernetes/issues/107325)中的讨论。 +允许太不受信任或者不受信任的用户对系统进行有限的访问在多租户集群中是特别重要的。 + +缓解此问题的一种选择是使用[资源配额](/zh/docs/concepts/policy/resource-quotas/#object-count-quota)以限制可以创建的对象数量。 \ No newline at end of file diff --git a/content/zh/docs/concepts/storage/windows-storage.md b/content/zh/docs/concepts/storage/windows-storage.md new file mode 100644 index 
0000000000000..142fadae0cb04 --- /dev/null +++ b/content/zh/docs/concepts/storage/windows-storage.md @@ -0,0 +1,132 @@ +--- +title: Windows 存储 +content_type: concept +--- + + + + +此页面提供特定于 Windows 操作系统的存储概述。 + + + +## 持久存储 {#storage} +Windows 有一个分层文件系统驱动程序用来挂载容器层和创建基于 NTFS 的文件系统拷贝。 +容器中的所有文件路径仅在该容器的上下文中解析。 + + +* 使用 Docker 时,卷挂载只能是容器中的目录,而不能是单个文件。此限制不适用于 containerd。 +* 卷挂载不能将文件或目录映射回宿主文件系统。 +* 不支持只读文件系统,因为 Windows 注册表和 SAM 数据库始终需要写访问权限。不过,Windows 支持只读的卷。 +* 不支持卷的用户掩码和访问许可,因为宿主与容器之间并不共享 SAM,二者之间不存在映射关系。 + 所有访问许可都是在容器上下文中解析的。 + + +因此,Windows 节点不支持以下存储功能: + + +* 卷子路径挂载:只能在 Windows 容器上挂载整个卷 +* Secret 的子路径挂载 +* 宿主挂载映射 +* 只读的根文件系统(映射的卷仍然支持 `readOnly`) +* 块设备映射 +* 内存作为存储介质(例如 `emptyDir.medium` 设置为 `Memory`) +* 类似 UID/GID、各用户不同的 Linux 文件系统访问许可等文件系统特性 +* 使用 [DefaultMode 设置 Secret 权限](/zh/docs/concepts/configuration/secret/#secret-files-permissions) + (因为该特性依赖 UID/GID) +* 基于 NFS 的存储和卷支持 +* 扩展已挂载卷(resizefs) + + +使用 Kubernetes {{< glossary_tooltip text="卷" term_id="volume" >}}, +对数据持久性和 Pod 卷共享有需求的复杂应用也可以部署到 Kubernetes 上。 +管理与特定存储后端或协议相关的持久卷时,相关的操作包括:对卷的制备(Provisioning)、 +去配(De-provisioning)和调整大小,将卷挂接到 Kubernetes 节点或从节点上解除挂接, +将卷挂载到需要持久数据的 Pod 中的某容器上或从容器上卸载。 + + +卷管理组件作为 Kubernetes 卷[插件](/zh/docs/concepts/storage/volumes/#types-of-volumes)发布。 +Windows 支持以下类型的 Kubernetes 卷插件: + + +* [`FlexVolume plugins`](/zh/docs/concepts/storage/volumes/#flexVolume) + * 请注意自 1.23 版本起,FlexVolume 已被弃用 +* [`CSI Plugins`](/zh/docs/concepts/storage/volumes/#csi) + + +##### 树内(In-Tree)卷插件 {#in-tree-volume-plugins} + +以下树内(In-Tree)插件支持 Windows 节点上的持久存储: + + +* [`awsElasticBlockStore`](/zh/docs/concepts/storage/volumes/#awselasticblockstore) +* [`azureDisk`](/zh/docs/concepts/storage/volumes/#azuredisk) +* [`azureFile`](/zh/docs/concepts/storage/volumes/#azurefile) +* [`gcePersistentDisk`](/zh/docs/concepts/storage/volumes/#gcepersistentdisk) +* [`vsphereVolume`](/zh/docs/concepts/storage/volumes/#vspherevolume) \ No newline at end of file diff --git a/content/zh/docs/concepts/windows/intro.md 
b/content/zh/docs/concepts/windows/intro.md new file mode 100644 index 0000000000000..f03ca6b0b1645 --- /dev/null +++ b/content/zh/docs/concepts/windows/intro.md @@ -0,0 +1,716 @@
+---
+title: Windows containers in Kubernetes
+content_type: concept
+weight: 65
+---
+
+
+
+Windows applications constitute a large portion of the services and applications that run
+in many organizations. [Windows containers](https://aka.ms/windowscontainers) provide a way
+to encapsulate processes and package dependencies, making it easier to use DevOps practices
+and follow cloud native patterns for Windows applications.
+
+Organizations with investments in Windows-based and Linux-based applications don't have to
+look for separate orchestrators to manage their workloads, leading to increased operational
+efficiency across their deployments, regardless of operating system.
+
+
+
+
+## Windows nodes in Kubernetes {#windows-nodes-in-k8s}
+
+To enable the orchestration of Windows containers in Kubernetes, include Windows nodes in
+your existing Linux cluster. Scheduling Windows containers in
+{{< glossary_tooltip text="Pods" term_id="pod" >}} on Kubernetes is similar to scheduling
+Linux-based containers.
+
+In order to run Windows containers, your Kubernetes cluster must include multiple operating
+systems. While you can only run the {{< glossary_tooltip text="control plane" term_id="control-plane" >}}
+on Linux, you can deploy worker nodes running either Windows or Linux.
+
+
+Windows {{< glossary_tooltip text="nodes" term_id="node" >}} are supported provided that
+the operating system is Windows Server 2019.
+
+This document uses the term **Windows containers** to mean Windows containers with process
+isolation. Kubernetes does not support running Windows containers with
+[Hyper-V isolation](https://docs.microsoft.com/zh-cn/virtualization/windowscontainers/manage-containers/hyperv-container).
+
+
+## Compatibility and limitations {#limitations}
+
+Some node features are only available if you use a specific
+[container runtime](#container-runtime); others are not available on Windows nodes, including:
+
+* HugePages: not supported for Windows containers
+* Privileged containers: not supported for Windows containers
+* TerminationGracePeriod: requires containerD
+
+
+Not all features of shared namespaces are supported on Windows nodes.
+See [API compatibility](#api) for more details.
+
+See [Windows OS version compatibility](#windows-os-version-support) for details on the
+Windows versions that Kubernetes is tested against.
+
+From an API and kubectl perspective, Windows containers behave in much the same way as
+Linux-based containers. However, there are some notable differences in key functionality
+which are outlined in this section.
+
+
+### Comparison with Linux {#comparison-with-Linux-similarities}
+
+Key Kubernetes components work the same way on Windows as they do on Linux. This section
+covers several key workload abstractions and how they map to Windows.
+
+
+* [Pods](/zh/docs/concepts/workloads/pods/)
+
+  A Pod is the basic building block of Kubernetes: the smallest and simplest unit that you
+  can create or deploy. You may not deploy Windows and Linux containers in the same Pod.
+  All containers in a Pod are scheduled onto a single Node, and each Node represents a
+  specific platform and architecture. The following Pod capabilities, properties, and
+  events are supported with Windows containers:
+
+  * Single or multiple containers per Pod, with process isolation and volume sharing
+  * Pod `status` fields
+  * Readiness and liveness probes
+  * postStart and preStop container lifecycle hooks
+  * ConfigMap and Secret: as environment variables or volumes
+  * `emptyDir` volumes
+  * Named pipe host mounts
+  * Resource limits
+  * OS field:
+
+    The `.spec.os.name` field should be set to `windows` to indicate that the current Pod
+    uses Windows containers. The `IdentifyPodOS` feature gate needs to be enabled for this
+    field to be recognized.
+
+    {{< note >}}
+    Starting from 1.24, the `IdentifyPodOS` feature gate is in Beta stage and is enabled by default.
+    {{< /note >}}
+
+    If the `IdentifyPodOS` feature gate is enabled and you set the `.spec.os.name` field to
+    `windows`, you must not set the following fields in the `.spec` of that Pod:
+
+    * `spec.hostPID`
+    * `spec.hostIPC`
+    * `spec.securityContext.seLinuxOptions`
+    * `spec.securityContext.seccompProfile`
+    * `spec.securityContext.fsGroup`
+    * `spec.securityContext.fsGroupChangePolicy`
+    * `spec.securityContext.sysctls`
+    * `spec.shareProcessNamespace`
+    * `spec.securityContext.runAsUser`
+    * `spec.securityContext.runAsGroup`
+    * `spec.securityContext.supplementalGroups`
+    * `spec.containers[*].securityContext.seLinuxOptions`
+    * `spec.containers[*].securityContext.seccompProfile`
+    * `spec.containers[*].securityContext.capabilities`
+    * `spec.containers[*].securityContext.readOnlyRootFilesystem`
+    * `spec.containers[*].securityContext.privileged`
+    * `spec.containers[*].securityContext.allowPrivilegeEscalation`
+    * `spec.containers[*].securityContext.procMount`
+    * `spec.containers[*].securityContext.runAsUser`
+    * `spec.containers[*].securityContext.runAsGroup`
+
+    In the above list, the wildcard (`*`) indicates all elements in a list. For example,
+    `spec.containers[*].securityContext` refers to the SecurityContext object for all
+    containers. If any of these fields is specified, the Pod will not be admitted by the
+    API server.
+
+
+* [Workload resources](/zh/docs/concepts/workloads/controllers/) including:
+
+  * ReplicaSet
+  * Deployment
+  * StatefulSet
+  * DaemonSet
+  * Job
+  * CronJob
+  * ReplicationController
+
+* {{< glossary_tooltip text="Services" term_id="service" >}}
+
+  See [Load balancing and Services](#load-balancing-and-services) for more details.
+
+
+Pods, workload resources, and Services are critical elements to managing Windows workloads
+on Kubernetes. However, on their own they are not enough to enable the proper lifecycle
+management of Windows workloads in a dynamic cloud native environment. Kubernetes also
+supports:
+
+* `kubectl exec`
+* Pod and container metrics
+* {{< glossary_tooltip text="Horizontal Pod autoscaling" term_id="horizontal-pod-autoscaler" >}}
+* {{< glossary_tooltip text="Resource quotas" term_id="resource-quota" >}}
+* Scheduler preemption
+
+
+### Command line options for the kubelet {#kubelet-compatibility}
+
+Some kubelet command line options behave differently on Windows, as described below:
+
+
+* `--windows-priorityclass` lets you set the scheduling priority of the kubelet process
+  (see [CPU resource management](/zh/docs/concepts/configuration/windows-resource-management/#resource-management-cpu)).
+* The `--kube-reserved`, `--system-reserved`, and `--eviction-hard` flags update
+  [NodeAllocatable](/zh/docs/tasks/administer-cluster/reserve-compute-resources/#node-allocatable).
+* Eviction by using `--enforce-node-allocatable` is not implemented.
+* Eviction by using `--eviction-hard` and `--eviction-soft` is not implemented.
+* A kubelet running on a Windows node does not have memory restrictions.
+  `--kube-reserved` and `--system-reserved` do not set limits on the kubelet or on other
+  processes running on the host. This means the kubelet or a process on the host could use
+  more memory than the node-allocatable amount the scheduler accounts for, causing memory
+  resource starvation.
+* The `MemoryPressure` condition is not implemented.
+* The kubelet does not take OOM eviction actions.
+
+
+### API compatibility {#api}
+
+There are subtle differences in the way the Kubernetes APIs work on Windows due to the
+operating system and container runtime. Some workload properties were designed for Linux,
+and fail to run on Windows.
+
+At a high level, these OS concepts are different:
+
+
+* Identity - Linux uses userID (UID) and groupID (GID), represented as integer types.
+  User and group names are not canonical; they are just aliases in `/etc/groups` or
+  `/etc/passwd` back to UID+GID.
+  Windows uses a larger binary
+  [security identifier](https://docs.microsoft.com/zh-cn/windows/security/identity-protection/access-control/security-identifiers) (SID)
+  which is stored in the Windows Security Access Manager (SAM) database.
+  This database is not shared between the host and containers, or between containers.
+* File permissions - Windows uses an access control list based on SIDs, whereas POSIX
+  systems such as Linux use a bitmask based on object permissions and UID+GID, plus
+  **optional** access control lists.
+* File paths - the convention on Windows is to use `\` instead of `/`.
+  The Go IO libraries typically accept both and just make it work, but when you are
+  setting a path or command line that is interpreted inside a container, `\` may be needed.
+
+
+* Signals - Windows interactive apps handle termination differently, and can implement one
+  or more of the following:
+  * A UI thread handles well-defined messages, including `WM_CLOSE`.
+  * Console apps handle Ctrl-C or Ctrl-Break using a Control Handler.
+  * Services register a Service Control Handler function that can accept
+    `SERVICE_CONTROL_STOP` control codes.
+
+Container exit codes follow the same convention, where 0 is success and nonzero is failure.
+The specific error codes may differ between Windows and Linux.
+However, exit codes passed from the Kubernetes components (kubelet, kube-proxy) are unchanged.
+
+
+##### Field compatibility for container specifications {#compatibility-v1-pod-spec-containers}
+
+The following list documents differences between how Pod container specifications work
+between Windows and Linux:
+
+* Huge pages are not implemented in the Windows container runtime, and are not available.
+  They require [asserting a user privilege](https://docs.microsoft.com/zh-cn/windows/win32/memory/large-page-support)
+  that is not configurable for containers.
+* `requests.cpu` and `requests.memory` -
+  requests are subtracted from node available resources, so they can be used to avoid
+  overprovisioning a node. However, they cannot be used to guarantee resources in an
+  overprovisioned node. They should be applied to all containers as a best practice if the
+  operator wants to avoid overprovisioning entirely.
+
+* `securityContext.allowPrivilegeEscalation` -
+  not possible on Windows; none of the capabilities are wired up.
+* `securityContext.capabilities` - POSIX capabilities are not implemented on Windows.
+* `securityContext.privileged` - Windows doesn't support privileged containers.
+* `securityContext.procMount` - Windows doesn't have a `/proc` filesystem.
+* `securityContext.readOnlyRootFilesystem` -
+  not possible on Windows; write access is required for the registry and system processes
+  running inside the container.
+* `securityContext.runAsGroup` - not possible on Windows as there is no GID support.
+
+* `securityContext.runAsNonRoot` -
+  this setting will prevent containers from running as `ContainerAdministrator`, which is
+  the closest equivalent to a root user on Windows.
+* `securityContext.runAsUser` - use [`runAsUserName`](/zh/docs/tasks/configure-pod-container/configure-runasusername) instead.
+* `securityContext.seLinuxOptions` - not possible on Windows as SELinux is Linux-specific.
+* `terminationMessagePath` - this has some limitations, in that Windows doesn't support
+  mapping single files. The default value is `/dev/termination-log`, which does work
+  because it does not exist on Windows by default.
+
+
+##### Field compatibility for Pod specifications {#compatibility-v1-pod}
+
+The following list documents differences between how Pod specifications work between
+Windows and Linux:
+
+* `hostIPC` and `hostPID` - host namespace sharing is not possible on Windows.
+* `hostNetwork` - there is no Windows OS support to share the host network.
+* `dnsPolicy` - setting the Pod `dnsPolicy` to `ClusterFirstWithHostNet` is not supported
+  on Windows because host networking is not provided. Pods always run with a container
+  network.
+* `podSecurityContext` (see below)
+* `shareProcessNamespace` - this is a beta feature, and depends on Linux namespaces which
+  are not implemented on Windows. Windows cannot share process namespaces or the
+  container's root filesystem. Only the network can be shared.
+
+* `terminationGracePeriodSeconds` - this is not fully implemented in Docker on Windows;
+  see the [GitHub issue](https://github.com/moby/moby/issues/25982).
+  The behavior today is that the ENTRYPOINT process is sent CTRL_SHUTDOWN_EVENT, then
+  Windows waits 5 seconds by default, and finally shuts down all processes using the
+  normal Windows shutdown behavior. The 5 second default is actually in the Windows
+  registry [inside the container](https://github.com/moby/moby/issues/25982#issuecomment-426441183),
+  so it can be overridden when the container is built.
+* `volumeDevices` - this is a beta feature, and is not implemented on Windows.
+  Windows cannot attach raw block devices to Pods.
+* `volumes`
+  * If you define an `emptyDir` volume, you cannot set its volume source to `memory`.
+* You cannot enable `mountPropagation` for volume mounts as this is not supported on Windows.
+
+
+##### Field compatibility for Pod security context {#compatibility-v1-pod-spec-containers-securitycontext}
+
+None of the Pod [`securityContext`](/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context)
+fields work on Windows.
+
+
+### Node problem detector {#node-problem-detector}
+
+The node problem detector (see [Monitor Node Health](/zh/docs/tasks/debug/debug-cluster/monitor-node-health/))
+is not compatible with Windows.
+
+
+### Pause container {#pause-container}
+
+In a Kubernetes Pod, an infrastructure or "pause" container is first created to host the
+containers. In Linux, the cgroups and namespaces that make up a Pod need a process to
+maintain their continued existence; the pause process provides this.
+Containers that belong to the same Pod, including infrastructure and worker containers,
+share a common network endpoint (same IPv4 and/or IPv6 address, same network port space).
+Kubernetes uses pause containers to allow worker containers to crash or restart without
+losing any of the networking configuration.
+
+
+Kubernetes maintains a multi-architecture image that includes support for Windows.
+For Kubernetes v{{< skew currentVersion >}}, the recommended pause image is `k8s.gcr.io/pause:3.6`.
+The [source code](https://github.com/kubernetes/kubernetes/tree/master/build/pause) is available on GitHub.
+
+Microsoft maintains a different multi-architecture image, with Linux and Windows amd64
+support, that you can find as `mcr.microsoft.com/oss/kubernetes/pause:3.6`.
+This image is built from the same source as the Kubernetes-maintained image, but all of
+the Windows binaries are [authenticode signed](https://docs.microsoft.com/zh-cn/windows-hardware/drivers/install/authenticode)
+by Microsoft. The Kubernetes project recommends using the Microsoft-maintained image if
+you are deploying to a production or production-like environment that requires signed
+binaries.
+
+
+### Container runtimes {#container-runtime}
+
+You need to install a {{< glossary_tooltip text="container runtime" term_id="container-runtime" >}}
+into each node in the cluster so that Pods can run there.
+
+The following container runtimes work with Windows:
+
+{{% thirdparty-content %}}
+
+
+#### cri-containerd {#cri-containerd}
+
+{{< feature-state for_k8s_version="v1.20" state="stable" >}}
+
+You can use {{< glossary_tooltip term_id="containerd" text="ContainerD" >}} 1.4.0+ as the
+container runtime for Kubernetes nodes that run Windows.
+
+Learn how to [install ContainerD on a Windows node](/zh/docs/setup/production-environment/container-runtimes/#install-containerd).
+
+
+{{< note >}}
+There is a [known limitation](/zh/docs/tasks/configure-pod-container/configure-gmsa/#gmsa-limitations)
+when using GMSA with containerd to access Windows network shares, which requires a kernel
+patch. 
+{{< /note >}}
+
+
+#### Mirantis Container Runtime {#mcr}
+
+[Mirantis Container Runtime](https://docs.mirantis.com/mcr/20.10/overview.html) (MCR) is
+available as a container runtime on all Windows Server 2019 and later versions.
+
+See [Install MCR on Windows Servers](https://docs.mirantis.com/mcr/20.10/install/mcr-windows.html)
+for more information.
+
+
+## Windows OS version compatibility {#windows-os-version-support}
+
+On Windows nodes, strict compatibility rules apply: the host OS version must match the
+container base image OS version.
+Only Windows containers with a container operating system of Windows Server 2019 are fully
+supported.
+
+For Kubernetes v{{< skew currentVersion >}}, operating system compatibility for Windows
+nodes (and Pods) is as follows:
+
+Windows Server LTSC release
+: Windows Server 2019
+: Windows Server 2022
+
+Windows Server SAC release
+: Windows Server version 20H2
+
+
+The Kubernetes [version-skew policy](/zh/docs/setup/release/version-skew-policy/) also applies.
+
+
+## Getting help and troubleshooting {#troubleshooting}
+
+Your main source of help for troubleshooting your Kubernetes cluster should start with the
+[Troubleshooting](/zh/docs/tasks/debug/) page.
+
+Some additional, Windows-specific troubleshooting help is included in this section.
+Logs are an important element of troubleshooting issues in Kubernetes.
+Make sure to include them any time you seek troubleshooting assistance from other
+contributors. Follow the instructions in the SIG Windows
+[contributing guide on gathering logs](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#gathering-logs).
+
+
+### Reporting issues and feature requests {#report-issue-and-feature-request}
+
+If you have what looks like a bug, or you would like to make a feature request, please
+follow the [SIG Windows contributing guide](https://github.com/kubernetes/community/blob/master/sig-windows/CONTRIBUTING.md#reporting-issues-and-feature-requests)
+to create a new issue.
+You should first search the list of issues in case it was reported previously, comment
+with your experience on the issue, and add additional logs.
+SIG Windows Slack is also a great avenue to get some initial support and troubleshooting
+ideas prior to creating a ticket.
+
+## {{% heading "whatsnext" %}}
+
+
+### Deployment tools {#deployment-tools}
+
+The kubeadm tool helps you deploy a Kubernetes cluster, providing the control plane to
+manage the cluster, and nodes to run your workloads.
+[Adding Windows nodes](/zh/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/)
+explains how to deploy Windows nodes to your cluster using kubeadm.
+
+The Kubernetes [Cluster API](https://cluster-api.sigs.k8s.io/) project also provides means
+to automate deployment of Windows nodes.
+
+
+### Windows distribution channels {#windows-distribution-channels}
+
+For a detailed explanation of Windows distribution channels, see the
+[Microsoft documentation](https://docs.microsoft.com/zh-cn/windows-server/get-started-19/servicing-channels-19).
+
+Information on the different Windows Server servicing channels, including their support
+models, can be found at
+[Windows Server servicing channels](https://docs.microsoft.com/zh-cn/windows-server/get-started/servicing-channels-comparison). diff --git a/content/zh/docs/contribute/participate/pr-wranglers.md b/content/zh/docs/contribute/participate/pr-wranglers.md index 20230e4d3b473..b2d9748839062 100644 --- a/content/zh/docs/contribute/participate/pr-wranglers.md +++ b/content/zh/docs/contribute/participate/pr-wranglers.md @@ -179,10 +179,10 @@ To close a pull request, leave a `/close` comment on the PR.
 To close a PR, leave a `/close` comment on the PR.
 {{< note >}}
-An automated service named [`fejta-bot`](https://github.com/fejta-bot) automatically marks an issue as stale after it has been inactive for 90
+An automated service named [`k8s-ci-robot`](https://github.com/k8s-ci-robot) automatically marks an issue as stale after it has been inactive for 90
 days, then waits a further 30 days before closing it if it remains untouched.
 PR wranglers should close issues after 14-30 days of inactivity.
 {{< /note >}} diff --git a/static/_redirects index 3ed31a7b3f970..0e9eb3b1f33c7 100644 --- a/static/_redirects +++ b/static/_redirects @@ -41,8 +41,8 @@
 /docs/admin/ha-master-gce/ /docs/setup/production-environment/#production-control-plane 301
 /docs/admin/ha-master-gce.md/ /docs/setup/production-environment/#production-control-plane 301
 /docs/admin/high-availability/ /docs/setup/production-environment/tools/kubeadm/high-availability/ 301
-/docs/admin/kubelet-authentication-authorization/ /docs/reference/command-line-tools-reference/kubelet-authentication-authorization/ 301
-/docs/admin/kubelet-tls-bootstrapping/ /docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/ 301
+/docs/admin/kubelet-authentication-authorization/ /docs/reference/access-authn-authz/kubelet-authn-authz/ 301
+/docs/admin/kubelet-tls-bootstrapping/ /docs/reference/access-authn-authz/kubelet-tls-bootstrapping/ 301
 /docs/admin/limitrange/ /docs/tasks/administer-cluster/cpu-memory-limit/ 301
 /docs/admin/limitrange/Limits/ /docs/tasks/administer-cluster/limit-storage-consumption/#limitrange-to-limit-requests-for-storage/ 301
 /docs/admin/master-node-communication/ /docs/concepts/architecture/master-node-communication/ 301 @@ -252,7 
+252,7 @@ /docs/tasks/administer-cluster/apply-resource-quota-limit/ /docs/tasks/administer-cluster/quota-api-object/ 301 /docs/tasks/administer-cluster/assign-pods-nodes/ /docs/tasks/configure-pod-container/assign-pods-nodes/ 301 /docs/tasks/administer-cluster/calico-network-policy/ /docs/tasks/administer-cluster/network-policy-provider/calico-network-policy/ 301 -/docs/tasks/administer-cluster/certificate-rotation/ /docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/ 301 +/docs/tasks/administer-cluster/certificate-rotation/ /docs/reference/access-authn-authz/kubelet-tls-bootstrapping/ 301 /docs/tasks/administer-cluster/cilium-network-policy/ /docs/tasks/administer-cluster/network-policy-provider/cilium-network-policy/ 301 /docs/tasks/administer-cluster/configure-namespace-isolation/ /docs/concepts/services-networking/network-policies/ 301 /docs/tasks/administer-cluster/configure-multiple-schedulers/ /docs/tasks/extend-kubernetes/configure-multiple-schedulers/ 301