From 006789e2e28eeb33c35e56c567c2aa3d888dbcc0 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 9 Sep 2020 10:52:06 -0700 Subject: [PATCH 01/26] add install instructions --- content/en/agent/kubernetes/_index.md | 58 +++++++++++++++++++++++++-- 1 file changed, 54 insertions(+), 4 deletions(-) diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index fc9027fb88000..343a7838bde83 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -174,11 +174,61 @@ To install the Datadog Agent on your Kubernetes cluster: {{% /tab %}} {{% tab "Operator" %}} -[The Datadog Operator][1] is in public beta. The Datadog Operator is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options. To get started, check out the [Getting Started page][2] in the [Datadog Operator repo][1] or install the operator from the [OperatorHub.io Datadog Operator page][3]. +[The Datadog Operator][1] is in public beta. The Datadog Operator is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options. -[1]: https://github.com/DataDog/datadog-operator/blob/master/docs/getting_started.md -[2]: https://github.com/DataDog/datadog-operator -[3]: https://operatorhub.io/operator/datadog-operator +## Prerequisites + +Using the Datadog Operator requires the following prerequisites: + +- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, due to limited CRD support, the operator may not work as expected. +- [`Helm`][2] for deploying the `datadog-operator`. 
+- [`Kubectl` CLI][3] for installing the `datadog-agent`. + +## Deploy an Agent with the operator + +To deploy a Datadog Agent with the operator in the minimum number of steps, use the [`datadog-agent-with-operator`][4] Helm chart. + + +1. [Download the chart][5]: + + ```shell + curl -Lo datadog-agent-with-operator.tar.gz https://github.com/DataDog/datadog-operator/releases/latest/download/datadog-agent-with-operator.tar.gz + ``` + +2. Create a file with the spec of your agent. The simplest configuration is: + + ```yaml + credentials: + apiKey: + appKey: + agent: + image: + name: "datadog/agent:latest" + ``` + + Replace `` and `` with your [Datadog API and application keys][6] + +3. Deploy the Datadog agent with the above configuration file: + ```shell + helm install --set-file agent_spec=/path/to/your/datadog-agent.yaml datadog datadog-agent-with-operator.tar.gz + ``` + +## Cleanup + +The following command deletes all the Kubernetes resources created by the above instructions: + +```shell +kubectl delete datadogagent datadog +helm delete datadog +``` + + +[1]: https://github.com/DataDog/datadog-operator +[2]: https://helm.sh +[3]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[4]: https://github.com/DataDog/datadog-operator/tree/master/chart/datadog-agent-with-operator +[5]: https://github.com/DataDog/datadog-operator/releases/latest/download/datadog-agent-with-operator.tar.gz +[6]: https://app.datadoghq.com/account/settings#api {{% /tab %}} {{< /tabs >}} From f64a5d1a88d2fd1dd9b20b3c40f4a4f448633a0a Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 9 Sep 2020 11:12:17 -0700 Subject: [PATCH 02/26] operator install section --- content/en/agent/kubernetes/_index.md | 130 ++++++++++++++++++++++---- 1 file changed, 112 insertions(+), 18 deletions(-) diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 343a7838bde83..15dab33912879 100644 --- a/content/en/agent/kubernetes/_index.md +++ 
b/content/en/agent/kubernetes/_index.md @@ -184,35 +184,122 @@ Using the Datadog Operator requires the following prerequisites: - [`Helm`][2] for deploying the `datadog-operator`. - [`Kubectl` CLI][3] for installing the `datadog-agent`. -## Deploy an Agent with the operator +## Deploy the Datadog Operator -To deploy a Datadog Agent with the operator in the minimum number of steps, use the [`datadog-agent-with-operator`][4] Helm chart. +To use the Datadog Operator, deploy it in your Kubernetes cluster. Then create a `DatadogAgent` Kubernetes resource that contains the Datadog deployment configuration: +1. Download the [Datadog Operator project zip ball][4]. Source code can be found at [`DataDog/datadog-operator`][5]. +2. Unzip the project, and go into the `./datadog-operator` folder. +3. Define your namespace and operator: -1. [Download the chart][5]: + ```shell + DD_NAMESPACE="datadog" + DD_NAMEOP="ddoperator" + ``` + +4. Create the namespace: ```shell - curl -Lo datadog-agent-with-operator.tar.gz https://github.com/DataDog/datadog-operator/releases/latest/download/datadog-agent-with-operator.tar.gz + kubectl create ns $DD_NAMESPACE ``` -2. Create a file with the spec of your agent. The simplest configuration is: +5. Install the operator with Helm: + + - Helm v2: - ```yaml - credentials: - apiKey: - appKey: - agent: - image: - name: "datadog/agent:latest" + ```shell + helm install --name $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator ``` - Replace `` and `` with your [Datadog API and application keys][6] + - Helm v3: -3. 
Deploy the Datadog agent with the above configuration file: ```shell - helm install --set-file agent_spec=/path/to/your/datadog-agent.yaml datadog datadog-agent-with-operator.tar.gz + helm install $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator ```
+## Deploy the Datadog Agents with the operator
+
+After deploying the Datadog Operator, create the `DatadogAgent` resource that triggers the Datadog Agent's deployment in your Kubernetes cluster. By creating this resource in the `Datadog-Operator` namespace, the Agent is deployed as a `DaemonSet` on every `Node` of your cluster.
+
+Create the `datadog-agent.yaml` manifest from one of the following templates:
+
+* [Manifest with Logs, APM, process, and metrics collection enabled.][6]
+* [Manifest with Logs, APM, and metrics collection enabled.][7]
+* [Manifest with Logs and metrics collection enabled.][8]
+* [Manifest with APM and metrics collection enabled.][9]
+* [Manifest with Cluster Agent.][10]
+* [Manifest with tolerations.][11]
+
+Replace `<DATADOG_API_KEY>` and `<DATADOG_APP_KEY>` with your [Datadog API and application keys][12], then trigger the Agent installation with the following command:
+
+```shell
+$ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
+datadogagent.datadoghq.com/datadog created
+```
+
+You can check the state of the `DatadogAgent` resource with:
+
+```shell
+kubectl get -n $DD_NAMESPACE dd datadog
+NAME ACTIVE AGENT CLUSTER-AGENT CLUSTER-CHECKS-RUNNER AGE
+datadog-agent True Running (2/2/2) 110m
+```
+
+In a two-worker-node cluster, you should see the Agent pods created on each node.
+ +```shell +$ kubectl get -n $DD_NAMESPACE daemonset +NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE +datadog-agent 2 2 2 2 2 5m30s + +$ kubectl get -n $DD_NAMESPACE pod -owide +NAME READY STATUS RESTARTS AGE IP NODE +agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 1h 10.244.2.11 kind-worker +datadog-agent-k26tp 1/1 Running 0 5m59s 10.244.2.13 kind-worker +datadog-agent-zcxx7 1/1 Running 0 5m59s 10.244.1.7 kind-worker2 +``` +### Tolerations + +Update your `datadog-agent.yaml` file with the following configuration to add the toleration in the `Daemonset.spec.template` of your `DaemonSet` : + +```yaml +apiVersion: datadoghq.com/v1alpha1 +kind: DatadogAgent +metadata: + name: datadog +spec: + credentials: + apiKey: "" + appKey: "" + agent: + image: + name: "datadog/agent:latest" + config: + tolerations: + - operator: Exists +``` + +Apply this new configuration: + +```shell +$ kubectl apply -f datadog-agent.yaml +datadogagent.datadoghq.com/datadog updated +``` + +The DaemonSet update can be validated by looking at the new desired pod value: + +```shell +$ kubectl get -n $DD_NAMESPACE daemonset +NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE +datadog-agent 3 3 3 3 3 7m31s + +$ kubectl get -n $DD_NAMESPACE pod +NAME READY STATUS RESTARTS AGE +agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 15h +datadog-agent-5ctrq 1/1 Running 0 7m43s +datadog-agent-lkfqt 0/1 Running 0 15s +datadog-agent-zvdbw 1/1 Running 0 8m1s + ## Cleanup The following command deletes all the Kubernetes resources created by the above instructions: @@ -226,9 +313,16 @@ helm delete datadog [1]: https://github.com/DataDog/datadog-operator [2]: https://helm.sh [3]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ -[4]: https://github.com/DataDog/datadog-operator/tree/master/chart/datadog-agent-with-operator -[5]: https://github.com/DataDog/datadog-operator/releases/latest/download/datadog-agent-with-operator.tar.gz -[6]: 
https://app.datadoghq.com/account/settings#api +[4]: https://github.com/DataDog/datadog-operator/releases/latest +[5]: https://github.com/DataDog/datadog-operator +[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml +[7]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml +[8]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml +[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml +[10]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml +[11]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml +[12]: https://app.datadoghq.com/account/settings#api + {{% /tab %}} {{< /tabs >}} From a41720e486ebf92c4176e3ab8cdd1c3bf49df4fb Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 9 Sep 2020 11:16:24 -0700 Subject: [PATCH 03/26] closing line --- content/en/agent/kubernetes/_index.md | 34 +++++++++++++-------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 15dab33912879..12d12b12a7b9a 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -188,7 +188,7 @@ Using the Datadog Operator requires the following prerequisites: To use the Datadog Operator, deploy it in your Kubernetes cluster. Then create a `DatadogAgent` Kubernetes resource that contains the Datadog deployment configuration: -1. Download the [Datadog Operator project zip ball][4]. Source code can be found at [`DataDog/datadog-operator`][5]. +1. Download the [Datadog Operator project zip ball][4]. Source code can be found at [`DataDog/datadog-operator`][1]. 2. Unzip the project, and go into the `./datadog-operator` folder. 3. 
Define your namespace and operator: @@ -223,14 +223,14 @@ After deploying the Datadog Operator, create the `DatadogAgent` resource that tr Create the `datadog-agent.yaml` manifest out of one of the following templates: -* [Manifest with Logs, APM, process, and metrics collection enabled.][6] -* [Manifest with Logs, APM, and metrics collection enabled.][7] -* [Manifest with Logs and metrics collection enabled.][8] -* [Manifest with APM and metrics collection enabled.][9] -* [Manifest with Cluster Agent.][10] -* [Manifest with tolerations.][11] +* [Manifest with Logs, APM, process, and metrics collection enabled.][5] +* [Manifest with Logs, APM, and metrics collection enabled.][6] +* [Manifest with Logs and metrics collection enabled.][7] +* [Manifest with APM and metrics collection enabled.][8] +* [Manifest with Cluster Agent.][9] +* [Manifest with tolerations.][10] -Replace `` and `` with your [Datadog API and application keys][12], then trigger the Agent installation with the following command: +Replace `` and `` with your [Datadog API and application keys][11], then trigger the Agent installation with the following command: ```shell $ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml @@ -299,6 +299,7 @@ agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 15h datadog-agent-5ctrq 1/1 Running 0 7m43s datadog-agent-lkfqt 0/1 Running 0 15s datadog-agent-zvdbw 1/1 Running 0 8m1s +``` ## Cleanup @@ -310,19 +311,18 @@ helm delete datadog ``` + [1]: https://github.com/DataDog/datadog-operator [2]: https://helm.sh [3]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ [4]: https://github.com/DataDog/datadog-operator/releases/latest -[5]: https://github.com/DataDog/datadog-operator -[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml -[7]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml -[8]: 
https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml -[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml -[10]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml -[11]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml -[12]: https://app.datadoghq.com/account/settings#api - +[5]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml +[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml +[7]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml +[8]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml +[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml +[10]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml +[11]: https://app.datadoghq.com/account/settings#api {{% /tab %}} {{< /tabs >}} From 3536a09f757dac9e710dcc9e44a4158df40769be Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 9 Sep 2020 11:37:06 -0700 Subject: [PATCH 04/26] trying to fetch everything --- .gitignore | 1 + .gitlab-ci.yml | 2 +- Makefile | 2 ++ config/_default/menus/menus.en.yaml | 5 ++++ content/en/agent/kubernetes/apm.md | 22 ++++++++++++++++ content/en/agent/kubernetes/log.md | 22 ++++++++++++++++ .../py/build/configurations/pull_config.yaml | 26 +++++++++++++++++++ .../configurations/pull_config_preview.yaml | 26 +++++++++++++++++++ 8 files changed, 105 insertions(+), 1 deletion(-) diff --git a/.gitignore b/.gitignore index a4ca5be48276a..94bbd06519605 100644 --- a/.gitignore +++ b/.gitignore @@ -19,6 +19,7 @@ content/en/agent/basic_agent_usage/chef.md content/en/agent/basic_agent_usage/heroku.md 
content/en/agent/basic_agent_usage/puppet.md content/en/agent/basic_agent_usage/saltstack.md +content/en/agent/kubernetes/operator_configuration.md # Tracing content/en/tracing/setup/ruby.md diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index c71d5d03167b0..5d6d2711f1542 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -197,7 +197,7 @@ build_live: variables: CONFIG: ${LIVE_CONFIG} URL: ${LIVE_DOMAIN} - UNTRACKED_EXTRAS: "data,content/en/agent/basic_agent_usage/heroku.md,content/en/agent/basic_agent_usage/ansible.md,content/en/agent/basic_agent_usage/chef.md,content/en/agent/basic_agent_usage/puppet.md,content/en/developers/integrations,content/en/agent/basic_agent_usage/saltstack.md,content/en/developers/amazon_cloudformation.md,content/en/integrations,content/en/logs/log_collection/android.md,content/en/logs/log_collection/ios.md,content/en/tracing/setup/android.md,content/en/tracing/setup/ruby.md,content/en/security_monitoring/default_rules,content/en/serverless/forwarder.md,content/en/serverless/datadog_lambda_library/python.md,content/en/serverless/datadog_lambda_library/nodejs.md,content/en/serverless/datadog_lambda_library/ruby.md,content/en/serverless/datadog_lambda_library/go.md,content/en/serverless/datadog_lambda_library/java.md,content/en/real_user_monitoring/android.md" + UNTRACKED_EXTRAS: 
"data,content/en/agent/basic_agent_usage/heroku.md,content/en/agent/basic_agent_usage/ansible.md,content/en/agent/basic_agent_usage/chef.md,content/en/agent/basic_agent_usage/puppet.md,content/en/developers/integrations,content/en/agent/basic_agent_usage/saltstack.md,content/en/developers/amazon_cloudformation.md,content/en/integrations,content/en/logs/log_collection/android.md,content/en/logs/log_collection/ios.md,content/en/tracing/setup/android.md,content/en/tracing/setup/ruby.md,content/en/security_monitoring/default_rules,content/en/serverless/forwarder.md,content/en/serverless/datadog_lambda_library/python.md,content/en/serverless/datadog_lambda_library/nodejs.md,content/en/serverless/datadog_lambda_library/ruby.md,content/en/serverless/datadog_lambda_library/go.md,content/en/serverless/datadog_lambda_library/java.md,content/en/real_user_monitoring/android.md,content/en/agent/kubernetes/operator_configuration.md" CONFIGURATION_FILE: "./local/bin/py/build/configurations/pull_config.yaml" LOCAL: "False" script: diff --git a/Makefile b/Makefile index 510393aa28bf5..07ffc22643497 100644 --- a/Makefile +++ b/Makefile @@ -125,6 +125,8 @@ clean-auto-doc: ##Remove all doc automatically created rm -f content/en/logs/log_collection/ios.md ;fi @if [ content/en/tracing/setup/android.md ]; then \ rm -f content/en/tracing/setup/android.md ;fi + @if [ content/en/agent/kubernetes/operator_configuration.md ]; then \ + rm -f content/en/agent/kubernetes/operator_configuration.md ;fi clean-node: ## Remove node_modules. 
@if [ -d node_modules ]; then rm -r node_modules; fi diff --git a/config/_default/menus/menus.en.yaml b/config/_default/menus/menus.en.yaml index acd0eeafbb4a0..dff13334a81fc 100644 --- a/config/_default/menus/menus.en.yaml +++ b/config/_default/menus/menus.en.yaml @@ -248,6 +248,11 @@ main: parent: agent_kubernetes identifier: agent_kubernetes_data_collected weight: 306 + - name: Operator configuration + url: agent/kubernetes/operator_configuration + parent: agent_kubernetes + identifier: agent_kubernetes_operator_configuration + weight: 307 - name: Cluster Agent url: agent/cluster_agent/ parent: agent diff --git a/content/en/agent/kubernetes/apm.md b/content/en/agent/kubernetes/apm.md index c40937482e1ed..8dfcf0cb9c21e 100644 --- a/content/en/agent/kubernetes/apm.md +++ b/content/en/agent/kubernetes/apm.md @@ -81,6 +81,28 @@ To enable APM trace collection, open the DaemonSet configuration file and edit t # (...) ``` +{{% /tab %}} +{{% tab "Operator" %}} + +Update your `datadog-agent.yaml` manifest with: + +``` +agent: + image: + name: "datadog/agent:latest" + apm: + enabled: true +``` + +See the sample [manifest with APM and metrics collection enabled][1] for a complete example. + +Then apply the new configuration: + +```shell +$ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml +``` + +[1]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml {{% /tab %}} {{< /tabs >}} **Note**: On minikube, you may receive an `Unable to detect the kubelet URL automatically` error. In this case, set `DD_KUBELET_TLS_VERIFY=false`. 
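With the operator, one way to apply this minikube workaround is through environment overrides in your `datadog-agent.yaml` manifest. This is a minimal sketch, assuming the `agent.config.env` field accepts Kubernetes-style `name`/`value` pairs (the field name is an assumption; verify it against the operator configuration reference):

```yaml
agent:
  image:
    name: "datadog/agent:latest"
  config:
    # Assumed override list: sets DD_KUBELET_TLS_VERIFY in the Agent container,
    # disabling TLS verification against the kubelet as the note above suggests.
    env:
      - name: DD_KUBELET_TLS_VERIFY
        value: "false"
```

Then apply the updated manifest with `kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml`, as in the other examples on this page.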
diff --git a/content/en/agent/kubernetes/log.md b/content/en/agent/kubernetes/log.md index 3ea1cb8c3e506..780dd6da2c126 100644 --- a/content/en/agent/kubernetes/log.md +++ b/content/en/agent/kubernetes/log.md @@ -116,6 +116,28 @@ datadog: [1]: https://github.com/DataDog/helm-charts/blob/master/charts/datadog/values.yaml {{% /tab %}} +{{% tab "Operator" %}} + +Update your `datadog-agent.yaml` manifest with: + +``` +agent: + image: + name: "datadog/agent:latest" + log: + enabled: true +``` + +See the sample [manifest with logs and metrics collection enabled][1] for a complete example. + +Then apply the new configuration: + +```shell +$ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml +``` + +[1]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml +{{% /tab %}} {{< /tabs >}} **Note**: If you do want to collect logs from `/var/log/pods` even if the Docker socket is mounted, set the environment variable `DD_LOGS_CONFIG_K8S_CONTAINER_USE_FILE` (or `logs_config.k8s_container_use_file` in `datadog.yaml`) to `true` in order to force the Agent to go for the file collection mode. diff --git a/local/bin/py/build/configurations/pull_config.yaml b/local/bin/py/build/configurations/pull_config.yaml index e73b0b2a3dde3..56c44ded929be 100644 --- a/local/bin/py/build/configurations/pull_config.yaml +++ b/local/bin/py/build/configurations/pull_config.yaml @@ -392,3 +392,29 @@ - "runtime/**/*.md" options: dest_path: '/security_monitoring/default_rules/' + + - repo_name: datadog-operator + contents: + + - action: pull-and-push-file + branch: master + globs: + - 'docs/configuration.md' + options: + dest_path: '/agent/kubernetes/' + file_name: 'operator_configuration.md' + front_matters: + title: Datadog Operator Configuration + kind: documentation + description: "Configuration options for the Datadog Operator." 
+ dependencies: ["https://github.com/DataDog/datadog-operator/blob/master/docs/configuration.md"] + further_reading: + - link: 'agent/kubernetes' + tag: 'Documentation' + text: 'Deploy Datadog with Kubernetes' + - link: 'agent/kubernetes/log' + tag: 'Documentation' + text: 'Collect your application logs' + - link: '/agent/kubernetes/apm' + tag: 'Documentation' + text: 'Collect your application traces' diff --git a/local/bin/py/build/configurations/pull_config_preview.yaml b/local/bin/py/build/configurations/pull_config_preview.yaml index e73b0b2a3dde3..7e71e9f163bc9 100644 --- a/local/bin/py/build/configurations/pull_config_preview.yaml +++ b/local/bin/py/build/configurations/pull_config_preview.yaml @@ -392,3 +392,29 @@ - "runtime/**/*.md" options: dest_path: '/security_monitoring/default_rules/' + + - repo_name: datadog-operator + contents: + + - action: pull-and-push-file + branch: master + globs: + - 'docs/configuration.md' + options: + dest_path: '/agent/kubernetes/' + file_name: 'operator_configuration.md' + front_matters: + title: Datadog Operator Configuration + kind: documentation + description: "Configuration options for the Datadog Operator." 
+ dependencies: ["https://github.com/DataDog/datadog-operator/blob/master/docs/configuration.md"] + further_reading: + - link: 'agent/kubernetes' + tag: 'Documentation' + text: 'Deploy Datadog with Kubernetes' + - link: 'agent/kubernetes/log' + tag: 'Documentation' + text: 'Collect your application logs' + - link: '/agent/kubernetes/apm' + tag: 'Documentation' + text: 'Collect your application traces' From c1f0ad76041070f772e243f0539699269bb6d7ea Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 9 Sep 2020 11:39:35 -0700 Subject: [PATCH 05/26] fixed indent --- .../configurations/pull_config_preview.yaml | 50 +++++++++---------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/local/bin/py/build/configurations/pull_config_preview.yaml b/local/bin/py/build/configurations/pull_config_preview.yaml index 7e71e9f163bc9..56c44ded929be 100644 --- a/local/bin/py/build/configurations/pull_config_preview.yaml +++ b/local/bin/py/build/configurations/pull_config_preview.yaml @@ -392,29 +392,29 @@ - "runtime/**/*.md" options: dest_path: '/security_monitoring/default_rules/' - + - repo_name: datadog-operator - contents: - - - action: pull-and-push-file - branch: master - globs: - - 'docs/configuration.md' - options: - dest_path: '/agent/kubernetes/' - file_name: 'operator_configuration.md' - front_matters: - title: Datadog Operator Configuration - kind: documentation - description: "Configuration options for the Datadog Operator." 
- dependencies: ["https://github.com/DataDog/datadog-operator/blob/master/docs/configuration.md"] - further_reading: - - link: 'agent/kubernetes' - tag: 'Documentation' - text: 'Deploy Datadog with Kubernetes' - - link: 'agent/kubernetes/log' - tag: 'Documentation' - text: 'Collect your application logs' - - link: '/agent/kubernetes/apm' - tag: 'Documentation' - text: 'Collect your application traces' + contents: + + - action: pull-and-push-file + branch: master + globs: + - 'docs/configuration.md' + options: + dest_path: '/agent/kubernetes/' + file_name: 'operator_configuration.md' + front_matters: + title: Datadog Operator Configuration + kind: documentation + description: "Configuration options for the Datadog Operator." + dependencies: ["https://github.com/DataDog/datadog-operator/blob/master/docs/configuration.md"] + further_reading: + - link: 'agent/kubernetes' + tag: 'Documentation' + text: 'Deploy Datadog with Kubernetes' + - link: 'agent/kubernetes/log' + tag: 'Documentation' + text: 'Collect your application logs' + - link: '/agent/kubernetes/apm' + tag: 'Documentation' + text: 'Collect your application traces' From d566a4b4821a57fb4060f5688a1d2f84e00bd087 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Thu, 10 Sep 2020 07:40:07 -0700 Subject: [PATCH 06/26] some style thing --- content/en/agent/kubernetes/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 12d12b12a7b9a..1872886a3b9ae 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -180,7 +180,7 @@ To install the Datadog Agent on your Kubernetes cluster: Using the Datadog Operator requires the following prerequisites: -- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, due to limited CRD support, the operator may not work as expected. 
+- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, because of limited CRD support, the operator may not work as expected. - [`Helm`][2] for deploying the `datadog-operator`. - [`Kubectl` CLI][3] for installing the `datadog-agent`. From 3261a65c9913ac09c9727d030da8a80c52e8a85c Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Fri, 18 Sep 2020 11:24:54 -0700 Subject: [PATCH 07/26] moving tolerations to an faq --- content/en/agent/faq/_index.md | 1 + content/en/agent/faq/operator-tolerations.md | 55 ++++++++++++++++++++ content/en/agent/kubernetes/_index.md | 41 --------------- 3 files changed, 56 insertions(+), 41 deletions(-) create mode 100644 content/en/agent/faq/operator-tolerations.md diff --git a/content/en/agent/faq/_index.md b/content/en/agent/faq/_index.md index 595da9a1f1be3..59009d253542c 100644 --- a/content/en/agent/faq/_index.md +++ b/content/en/agent/faq/_index.md @@ -25,4 +25,5 @@ aliases: {{< nextlink href="agent/faq/auto_conf" >}}Auto-configuration for Autodiscovery.{{< /nextlink >}} {{< nextlink href="agent/faq/template_variables" >}}Template variables used for Autodiscovery{{< /nextlink >}} {{< nextlink href="agent/faq/commonly-used-log-processing-rules" >}}Commonly Used Log Processing Rules{{< /nextlink >}} + {{< nextlink href="agent/faq/operator-tolerations" >}}Using tolerations with Datadog Operator{{< /nextlink >}} {{< /whatsnext >}} diff --git a/content/en/agent/faq/operator-tolerations.md b/content/en/agent/faq/operator-tolerations.md new file mode 100644 index 0000000000000..48ed4d7163261 --- /dev/null +++ b/content/en/agent/faq/operator-tolerations.md @@ -0,0 +1,55 @@ +--- +title: Using tolerations with Datadog Operator +kind: faq +further_reading: + - link: 'agent/kubernetes/log' + tag: 'Documentation' + text: 'Datadog and Kubernetes' +--- + +### Tolerations + +Update your `datadog-agent.yaml` file with the following configuration to 
add the toleration in the `Daemonset.spec.template` of your `DaemonSet` : + +```yaml +apiVersion: datadoghq.com/v1alpha1 +kind: DatadogAgent +metadata: + name: datadog +spec: + credentials: + apiKey: "" + appKey: "" + agent: + image: + name: "datadog/agent:latest" + config: + tolerations: + - operator: Exists +``` + +Apply this new configuration: + +```shell +$ kubectl apply -f datadog-agent.yaml +datadogagent.datadoghq.com/datadog updated +``` + +The DaemonSet update can be validated by looking at the new desired pod value: + +```shell +$ kubectl get -n $DD_NAMESPACE daemonset +NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE +datadog-agent 3 3 3 3 3 7m31s + +$ kubectl get -n $DD_NAMESPACE pod +NAME READY STATUS RESTARTS AGE +agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 15h +datadog-agent-5ctrq 1/1 Running 0 7m43s +datadog-agent-lkfqt 0/1 Running 0 15s +datadog-agent-zvdbw 1/1 Running 0 8m1s +``` + +## Further Reading + +{{< partial name="whats-next/whats-next.html" >}} diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 1872886a3b9ae..4b13eeedee54d 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -258,48 +258,7 @@ agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 1h datadog-agent-k26tp 1/1 Running 0 5m59s 10.244.2.13 kind-worker datadog-agent-zcxx7 1/1 Running 0 5m59s 10.244.1.7 kind-worker2 ``` -### Tolerations - -Update your `datadog-agent.yaml` file with the following configuration to add the toleration in the `Daemonset.spec.template` of your `DaemonSet` : - -```yaml -apiVersion: datadoghq.com/v1alpha1 -kind: DatadogAgent -metadata: - name: datadog -spec: - credentials: - apiKey: "" - appKey: "" - agent: - image: - name: "datadog/agent:latest" - config: - tolerations: - - operator: Exists -``` - -Apply this new configuration: - -```shell -$ kubectl apply -f datadog-agent.yaml -datadogagent.datadoghq.com/datadog updated -``` -The DaemonSet 
update can be validated by looking at the new desired pod value: -
-```shell
-$ kubectl get -n $DD_NAMESPACE daemonset
-NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
-datadog-agent 3 3 3 3 3 7m31s
-
-$ kubectl get -n $DD_NAMESPACE pod
-NAME READY STATUS RESTARTS AGE
-agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 15h
-datadog-agent-5ctrq 1/1 Running 0 7m43s
-datadog-agent-lkfqt 0/1 Running 0 15s
-datadog-agent-zvdbw 1/1 Running 0 8m1s
-``` ## Cleanup From 534cc218164c452d73522cde918eb45ecffc7a5d Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Fri, 18 Sep 2020 11:34:07 -0700 Subject: [PATCH 08/26] adding operator to event collection section --- content/en/agent/kubernetes/_index.md | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 4b13eeedee54d..004c27187ab27 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -306,9 +306,22 @@ Set the `datadog.leaderElection`, `datadog.collectEvents` and `agents.rbac.creat {{% /tab %}} {{% tab "DaemonSet" %}} -If you want to collect events from your kubernetes cluster set the environment variables `DD_COLLECT_KUBERNETES_EVENTS` and `DD_LEADER_ELECTION` to `true` in your Agent manifest. Alternatively, use the [Datadoc Cluster Agent Event collection][1] +If you want to collect events from your Kubernetes cluster, set the environment variables `DD_COLLECT_KUBERNETES_EVENTS` and `DD_LEADER_ELECTION` to `true` in your Agent manifest. Alternatively, use the [Datadog Cluster Agent Event collection][1]. [1]: /agent/cluster_agent/event_collection/ {{% /tab %}} {{% tab "Operator" %}} + +Set `agent.config.collectEvents` to `true` in your `datadog-agent.yaml` manifest.
+ +For example: + +``` +agent: + config: + collectEvents: true +``` + {{% /tab %}} {{< /tabs >}} From c63709695f8ce174cbdd809d908b2affc7f72a53 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Fri, 18 Sep 2020 11:45:54 -0700 Subject: [PATCH 09/26] moving advanced stuff to a guide --- content/en/agent/faq/_index.md | 1 - content/en/agent/faq/operator-tolerations.md | 55 ------- content/en/agent/guide/_index.md | 1 + content/en/agent/guide/operator-advanced.md | 162 +++++++++++++++++++ content/en/agent/kubernetes/_index.md | 92 +++-------- 5 files changed, 184 insertions(+), 127 deletions(-) delete mode 100644 content/en/agent/faq/operator-tolerations.md create mode 100644 content/en/agent/guide/operator-advanced.md diff --git a/content/en/agent/faq/_index.md b/content/en/agent/faq/_index.md index 59009d253542c..595da9a1f1be3 100644 --- a/content/en/agent/faq/_index.md +++ b/content/en/agent/faq/_index.md @@ -25,5 +25,4 @@ aliases: {{< nextlink href="agent/faq/auto_conf" >}}Auto-configuration for Autodiscovery.{{< /nextlink >}} {{< nextlink href="agent/faq/template_variables" >}}Template variables used for Autodiscovery{{< /nextlink >}} {{< nextlink href="agent/faq/commonly-used-log-processing-rules" >}}Commonly Used Log Processing Rules{{< /nextlink >}} - {{< nextlink href="agent/faq/operator-tolerations" >}}Using tolerations with Datadog Operator{{< /nextlink >}} {{< /whatsnext >}} diff --git a/content/en/agent/faq/operator-tolerations.md b/content/en/agent/faq/operator-tolerations.md deleted file mode 100644 index 48ed4d7163261..0000000000000 --- a/content/en/agent/faq/operator-tolerations.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Using tolerations with Datadog Operator -kind: faq -further_reading: - - link: 'agent/kubernetes/log' - tag: 'Documentation' - text: 'Datadog and Kubernetes' ---- - -### Tolerations - -Update your `datadog-agent.yaml` file with the following configuration to add the toleration in the `Daemonset.spec.template` of your `DaemonSet` : - 
-```yaml -apiVersion: datadoghq.com/v1alpha1 -kind: DatadogAgent -metadata: - name: datadog -spec: - credentials: - apiKey: "" - appKey: "" - agent: - image: - name: "datadog/agent:latest" - config: - tolerations: - - operator: Exists -``` - -Apply this new configuration: - -```shell -$ kubectl apply -f datadog-agent.yaml -datadogagent.datadoghq.com/datadog updated -``` - -The DaemonSet update can be validated by looking at the new desired pod value: - -```shell -$ kubectl get -n $DD_NAMESPACE daemonset -NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE -datadog-agent 3 3 3 3 3 7m31s - -$ kubectl get -n $DD_NAMESPACE pod -NAME READY STATUS RESTARTS AGE -agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 15h -datadog-agent-5ctrq 1/1 Running 0 7m43s -datadog-agent-lkfqt 0/1 Running 0 15s -datadog-agent-zvdbw 1/1 Running 0 8m1s -``` - -## Further Reading - -{{< partial name="whats-next/whats-next.html" >}} diff --git a/content/en/agent/guide/_index.md b/content/en/agent/guide/_index.md index 1cb1132be2370..b11d9d5e8a9d6 100644 --- a/content/en/agent/guide/_index.md +++ b/content/en/agent/guide/_index.md @@ -23,6 +23,7 @@ private: true {{< nextlink href="agent/guide/build-container-agent" >}}Build a Datadog Agent image{{< /nextlink >}} {{< nextlink href="agent/guide/autodiscovery-management" >}}Manage container discovery with the Agent.{{< /nextlink >}} {{< nextlink href="agent/guide/ad_identifiers" >}}Apply an Autodiscovery configuration file template to a given container with the ad_identifers parameter.{{< /nextlink >}} + {{< nextlink href="agent/guide/operator-advanced" >}}Advanced setup for Datadog Operator.{{< /nextlink >}} {{< /whatsnext >}}
{{< whatsnext desc="Agent 5 Guides:" >}}

diff --git a/content/en/agent/guide/operator-advanced.md b/content/en/agent/guide/operator-advanced.md
new file mode 100644
index 0000000000000..5d05d2976b3b7
--- /dev/null
+++ b/content/en/agent/guide/operator-advanced.md
@@ -0,0 +1,162 @@
+---
+title: Advanced setup for Datadog Operator
+kind: faq
+further_reading:
+    - link: 'agent/kubernetes/log'
+      tag: 'Documentation'
+      text: 'Datadog and Kubernetes'
+---
+
+[The Datadog Operator][1] is in public beta. The Datadog Operator is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
+
+## Prerequisites
+
+Using the Datadog Operator requires the following prerequisites:
+
+- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions `>= 1.14.0`, but it should also work on versions `>= v1.11.0`. On earlier versions, the operator may not work as expected because of limited CRD support.
+- [`Helm`][2] for deploying the `datadog-operator`.
+- [`Kubectl` CLI][3] for installing the `datadog-agent`.
+
+## Deploy the Datadog Operator
+
+To use the Datadog Operator, deploy it in your Kubernetes cluster. Then create a `DatadogAgent` Kubernetes resource that contains the Datadog deployment configuration:
+
+1. Download the [Datadog Operator project zip archive][4]. Source code can be found at [`DataDog/datadog-operator`][1].
+2. Unzip the project and go into the `./datadog-operator` folder.
+3. Define your namespace and operator:
+
+    ```shell
+    DD_NAMESPACE="datadog"
+    DD_NAMEOP="ddoperator"
+    ```
+
+4. Create the namespace:
+
+    ```shell
+    kubectl create ns $DD_NAMESPACE
+    ```
+
+5. 
Install the operator with Helm:
+
+    - Helm v2:
+
+    ```shell
+    helm install --name $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator
+    ```
+
+    - Helm v3:
+
+    ```shell
+    helm install $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator
+    ```
+
+## Deploy the Datadog Agents with the operator
+
+After deploying the Datadog Operator, create the `DatadogAgent` resource that triggers the Datadog Agent's deployment in your Kubernetes cluster. By creating this resource in the `Datadog-Operator` namespace, the Agent is deployed as a `DaemonSet` on every `Node` of your cluster.
+
+Create the `datadog-agent.yaml` manifest out of one of the following templates:
+
+* [Manifest with Logs, APM, process, and metrics collection enabled.][5]
+* [Manifest with Logs, APM, and metrics collection enabled.][6]
+* [Manifest with Logs and metrics collection enabled.][7]
+* [Manifest with APM and metrics collection enabled.][8]
+* [Manifest with Cluster Agent.][9]
+* [Manifest with tolerations.][10]
+
+Replace `` and `` with your [Datadog API and application keys][11], then trigger the Agent installation with the following command:
+
+```shell
+$ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml
+datadogagent.datadoghq.com/datadog created
+```
+
+You can check the state of the `DatadogAgent` resource with:
+
+```shell
+kubectl get -n $DD_NAMESPACE dd datadog
+NAME            ACTIVE   AGENT             CLUSTER-AGENT   CLUSTER-CHECKS-RUNNER   AGE
+datadog-agent   True     Running (2/2/2)                                           110m
+```
+
+In a two-worker-node cluster, you should see the Agent pods created on each node.
+ +```shell +$ kubectl get -n $DD_NAMESPACE daemonset +NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE +datadog-agent 2 2 2 2 2 5m30s + +$ kubectl get -n $DD_NAMESPACE pod -owide +NAME READY STATUS RESTARTS AGE IP NODE +agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 1h 10.244.2.11 kind-worker +datadog-agent-k26tp 1/1 Running 0 5m59s 10.244.2.13 kind-worker +datadog-agent-zcxx7 1/1 Running 0 5m59s 10.244.1.7 kind-worker2 +``` + + +## Cleanup + +The following command deletes all the Kubernetes resources created by the above instructions: + +```shell +kubectl delete datadogagent datadog +helm delete datadog +``` + +### Tolerations + +Update your `datadog-agent.yaml` file with the following configuration to add the toleration in the `Daemonset.spec.template` of your `DaemonSet` : + +```yaml +apiVersion: datadoghq.com/v1alpha1 +kind: DatadogAgent +metadata: + name: datadog +spec: + credentials: + apiKey: "" + appKey: "" + agent: + image: + name: "datadog/agent:latest" + config: + tolerations: + - operator: Exists +``` + +Apply this new configuration: + +```shell +$ kubectl apply -f datadog-agent.yaml +datadogagent.datadoghq.com/datadog updated +``` + +The DaemonSet update can be validated by looking at the new desired pod value: + +```shell +$ kubectl get -n $DD_NAMESPACE daemonset +NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE +datadog-agent 3 3 3 3 3 7m31s + +$ kubectl get -n $DD_NAMESPACE pod +NAME READY STATUS RESTARTS AGE +agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 15h +datadog-agent-5ctrq 1/1 Running 0 7m43s +datadog-agent-lkfqt 0/1 Running 0 15s +datadog-agent-zvdbw 1/1 Running 0 8m1s +``` + +## Further Reading + +{{< partial name="whats-next/whats-next.html" >}} + +[1]: https://github.com/DataDog/datadog-operator +[2]: https://helm.sh +[3]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[4]: https://github.com/DataDog/datadog-operator/releases/latest +[5]: 
https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml +[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml +[7]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml +[8]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml +[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml +[10]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml +[11]: https://app.datadoghq.com/account/settings#api diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 004c27187ab27..25aeed4d8a7cc 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -184,82 +184,36 @@ Using the Datadog Operator requires the following prerequisites: - [`Helm`][2] for deploying the `datadog-operator`. - [`Kubectl` CLI][3] for installing the `datadog-agent`. -## Deploy the Datadog Operator -To use the Datadog Operator, deploy it in your Kubernetes cluster. Then create a `DatadogAgent` Kubernetes resource that contains the Datadog deployment configuration: +## Deploy an Agent with the operator -1. Download the [Datadog Operator project zip ball][4]. Source code can be found at [`DataDog/datadog-operator`][1]. -2. Unzip the project, and go into the `./datadog-operator` folder. -3. Define your namespace and operator: +To deploy a Datadog Agent with the operator in the minimum number of steps, use the [`datadog-agent-with-operator`][4] Helm chart. +Here are the steps: - ```shell - DD_NAMESPACE="datadog" - DD_NAMEOP="ddoperator" - ``` - -4. Create the namespace: +1. 
[Download the chart][5]: ```shell - kubectl create ns $DD_NAMESPACE + curl -Lo datadog-agent-with-operator.tar.gz https://github.com/DataDog/datadog-operator/releases/latest/download/datadog-agent-with-operator.tar.gz ``` -5. Install the operator with Helm: +2. Create a file with the spec of your Agent. The simplest configuration is: - - Helm v2: - - ```shell - helm install --name $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator + ```yaml + credentials: + apiKey: + appKey: + agent: + image: + name: "datadog/agent:latest" ``` - - Helm v3: + Replace `` and `` with your [Datadog API and application keys][6] +3. Deploy the Datadog Agent with the above configuration file: ```shell - helm install $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator + helm install --set-file agent_spec=/path/to/your/datadog-agent.yaml datadog datadog-agent-with-operator.tar.gz ``` -## Deploy the Datadog Agents with the operator - -After deploying the Datadog Operator, create the `DatadogAgent` resource that triggers the Datadog Agent's deployment in your Kubernetes cluster. By creating this resource in the `Datadog-Operator` namespace, the Agent is deployed as a `DaemonSet` on every `Node` of your cluster. 
- -Create the `datadog-agent.yaml` manifest out of one of the following templates: - -* [Manifest with Logs, APM, process, and metrics collection enabled.][5] -* [Manifest with Logs, APM, and metrics collection enabled.][6] -* [Manifest with Logs and metrics collection enabled.][7] -* [Manifest with APM and metrics collection enabled.][8] -* [Manifest with Cluster Agent.][9] -* [Manifest with tolerations.][10] - -Replace `` and `` with your [Datadog API and application keys][11], then trigger the Agent installation with the following command: - -```shell -$ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml -datadogagent.datadoghq.com/datadog created -``` - -You can check the state of the `DatadogAgent` ressource with: - -```shell -kubectl get -n $DD_NAMESPACE dd datadog -NAME ACTIVE AGENT CLUSTER-AGENT CLUSTER-CHECKS-RUNNER AGE -datadog-agent True Running (2/2/2) 110m -``` - -In a 2-worker-nodes cluster, you should see the Agent pods created on each node. - -```shell -$ kubectl get -n $DD_NAMESPACE daemonset -NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE -datadog-agent 2 2 2 2 2 5m30s - -$ kubectl get -n $DD_NAMESPACE pod -owide -NAME READY STATUS RESTARTS AGE IP NODE -agent-datadog-operator-d897fc9b-7wbsf 1/1 Running 0 1h 10.244.2.11 kind-worker -datadog-agent-k26tp 1/1 Running 0 5m59s 10.244.2.13 kind-worker -datadog-agent-zcxx7 1/1 Running 0 5m59s 10.244.1.7 kind-worker2 -``` - - ## Cleanup The following command deletes all the Kubernetes resources created by the above instructions: @@ -269,19 +223,15 @@ kubectl delete datadogagent datadog helm delete datadog ``` - +For further details on setting up Operator, including information about using tolerations, refer to the [Datadog Operator advanced setup guide][7]. 
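+
+As an illustration only, the minimal spec from step 2 accepts the same higher-level `agent.config` options documented for the `DatadogAgent` resource. The following sketch adds event collection and a blanket toleration; the placeholder key values are hypothetical, and the exact field names should be verified against the chart before use:
+
+```yaml
+credentials:
+  apiKey: "<your-api-key>"       # placeholder: your Datadog API key
+  appKey: "<your-app-key>"       # placeholder: your Datadog application key
+agent:
+  image:
+    name: "datadog/agent:latest"
+  config:
+    collectEvents: true          # collect events from the Kubernetes API
+    tolerations:                 # also schedule the Agent on tainted nodes
+      - operator: Exists
+```
+
+Pass this file through the same `--set-file agent_spec=` flag shown above.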
[1]: https://github.com/DataDog/datadog-operator [2]: https://helm.sh [3]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ -[4]: https://github.com/DataDog/datadog-operator/releases/latest -[5]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml -[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml -[7]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml -[8]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml -[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml -[10]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml -[11]: https://app.datadoghq.com/account/settings#api +[4]: https://github.com/DataDog/datadog-operator/tree/master/chart/datadog-agent-with-operator +[5]: https://github.com/DataDog/datadog-operator/releases/latest/download/datadog-agent-with-operator.tar.gz +[6]: https://app.datadoghq.com/account/settings#api +[7]: /agent/guide/operator-advanced {{% /tab %}} {{< /tabs >}} From 5faa96778328aed8c0399773ed0903e0e9567f7c Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Fri, 18 Sep 2020 12:26:32 -0700 Subject: [PATCH 10/26] fixing indent --- content/en/agent/guide/operator-advanced.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/agent/guide/operator-advanced.md b/content/en/agent/guide/operator-advanced.md index 5d05d2976b3b7..ab51629533e80 100644 --- a/content/en/agent/guide/operator-advanced.md +++ b/content/en/agent/guide/operator-advanced.md @@ -2,9 +2,9 @@ title: Advanced setup for Datadog Operator kind: faq further_reading: - - link: 'agent/kubernetes/log' - tag: 'Documentation' - text: 'Datadog and Kubernetes' + - link: 'agent/kubernetes/log' + tag: 'Documentation' + text: 'Datadog and Kubernetes' --- [The Datadog 
Operator][1] is in public beta. The Datadog Operator is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options. From 401bead2f9f449d41ef252f8cadca50281e10e41 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Fri, 18 Sep 2020 13:32:52 -0700 Subject: [PATCH 11/26] adding operator env variables to apm --- content/en/agent/kubernetes/apm.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/content/en/agent/kubernetes/apm.md b/content/en/agent/kubernetes/apm.md index 8dfcf0cb9c21e..ac52137e9efdd 100644 --- a/content/en/agent/kubernetes/apm.md +++ b/content/en/agent/kubernetes/apm.md @@ -156,6 +156,16 @@ List of all environment variables available for tracing within the Agent running | `DD_APM_MAX_EPS` | Sets the maximum Analyzed Spans per second. Default is 200 events per second. | | `DD_APM_MAX_TPS` | Sets the maximum traces per second. Default is 10 traces per second. | +### Operator environment variables +| Environment variable | Description | +| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `agent.apm.enabled` | Enable this to enable APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | +| `agent.apm.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.apm.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. 
Most containers do not need this. | +| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | | + + ## Further Reading {{< partial name="whats-next/whats-next.html" >}} From f7854f1b469768491222bebaa58f279508b1e9b4 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 09:44:25 -0700 Subject: [PATCH 12/26] updates --- .gitignore | 1 - .gitlab-ci.yml | 2 +- Makefile | 2 -- .../kubernetes/operator_configuration.md | 10 +++++++ .../py/build/configurations/pull_config.yaml | 26 ------------------- .../configurations/pull_config_preview.yaml | 26 ------------------- 6 files changed, 11 insertions(+), 56 deletions(-) create mode 100644 content/en/agent/kubernetes/operator_configuration.md diff --git a/.gitignore b/.gitignore index 94bbd06519605..a4ca5be48276a 100644 --- a/.gitignore +++ b/.gitignore @@ -19,7 +19,6 @@ content/en/agent/basic_agent_usage/chef.md content/en/agent/basic_agent_usage/heroku.md content/en/agent/basic_agent_usage/puppet.md content/en/agent/basic_agent_usage/saltstack.md -content/en/agent/kubernetes/operator_configuration.md # Tracing content/en/tracing/setup/ruby.md diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index c1c21e0a93aed..04442007bfbf9 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -201,7 +201,7 @@ build_live: variables: CONFIG: ${LIVE_CONFIG} URL: ${LIVE_DOMAIN} - UNTRACKED_EXTRAS: 
"data,content/en/agent/basic_agent_usage/heroku.md,content/en/agent/basic_agent_usage/ansible.md,content/en/agent/basic_agent_usage/chef.md,content/en/agent/basic_agent_usage/puppet.md,content/en/developers/integrations,content/en/agent/basic_agent_usage/saltstack.md,content/en/developers/amazon_cloudformation.md,content/en/integrations,content/en/logs/log_collection/android.md,content/en/logs/log_collection/ios.md,content/en/tracing/setup/android.md,content/en/tracing/setup/ruby.md,content/en/security_monitoring/default_rules,content/en/serverless/forwarder.md,content/en/serverless/datadog_lambda_library/python.md,content/en/serverless/datadog_lambda_library/nodejs.md,content/en/serverless/datadog_lambda_library/ruby.md,content/en/serverless/datadog_lambda_library/go.md,content/en/serverless/datadog_lambda_library/java.md,content/en/real_user_monitoring/android.md,content/en/agent/kubernetes/operator_configuration.md" + UNTRACKED_EXTRAS: "data,content/en/agent/basic_agent_usage/heroku.md,content/en/agent/basic_agent_usage/ansible.md,content/en/agent/basic_agent_usage/chef.md,content/en/agent/basic_agent_usage/puppet.md,content/en/developers/integrations,content/en/agent/basic_agent_usage/saltstack.md,content/en/developers/amazon_cloudformation.md,content/en/integrations,content/en/logs/log_collection/android.md,content/en/logs/log_collection/ios.md,content/en/tracing/setup/android.md,content/en/tracing/setup/ruby.md,content/en/security_monitoring/default_rules,content/en/serverless/forwarder.md,content/en/serverless/datadog_lambda_library/python.md,content/en/serverless/datadog_lambda_library/nodejs.md,content/en/serverless/datadog_lambda_library/ruby.md,content/en/serverless/datadog_lambda_library/go.md,content/en/serverless/datadog_lambda_library/java.md,content/en/real_user_monitoring/android.md" CONFIGURATION_FILE: "./local/bin/py/build/configurations/pull_config.yaml" LOCAL: "False" script: diff --git a/Makefile b/Makefile index 07ffc22643497..510393aa28bf5 
100644 --- a/Makefile +++ b/Makefile @@ -125,8 +125,6 @@ clean-auto-doc: ##Remove all doc automatically created rm -f content/en/logs/log_collection/ios.md ;fi @if [ content/en/tracing/setup/android.md ]; then \ rm -f content/en/tracing/setup/android.md ;fi - @if [ content/en/agent/kubernetes/operator_configuration.md ]; then \ - rm -f content/en/agent/kubernetes/operator_configuration.md ;fi clean-node: ## Remove node_modules. @if [ -d node_modules ]; then rm -r node_modules; fi diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md new file mode 100644 index 0000000000000..695e802af6801 --- /dev/null +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -0,0 +1,10 @@ +--- +title: Operator configuration +kind: faq +further_reading: + - link: 'agent/kubernetes/log' + tag: 'Documentation' + text: 'Datadog and Kubernetes' +--- + +tk \ No newline at end of file diff --git a/local/bin/py/build/configurations/pull_config.yaml b/local/bin/py/build/configurations/pull_config.yaml index 56c44ded929be..e73b0b2a3dde3 100644 --- a/local/bin/py/build/configurations/pull_config.yaml +++ b/local/bin/py/build/configurations/pull_config.yaml @@ -392,29 +392,3 @@ - "runtime/**/*.md" options: dest_path: '/security_monitoring/default_rules/' - - - repo_name: datadog-operator - contents: - - - action: pull-and-push-file - branch: master - globs: - - 'docs/configuration.md' - options: - dest_path: '/agent/kubernetes/' - file_name: 'operator_configuration.md' - front_matters: - title: Datadog Operator Configuration - kind: documentation - description: "Configuration options for the Datadog Operator." 
- dependencies: ["https://github.com/DataDog/datadog-operator/blob/master/docs/configuration.md"] - further_reading: - - link: 'agent/kubernetes' - tag: 'Documentation' - text: 'Deploy Datadog with Kubernetes' - - link: 'agent/kubernetes/log' - tag: 'Documentation' - text: 'Collect your application logs' - - link: '/agent/kubernetes/apm' - tag: 'Documentation' - text: 'Collect your application traces' diff --git a/local/bin/py/build/configurations/pull_config_preview.yaml b/local/bin/py/build/configurations/pull_config_preview.yaml index 56c44ded929be..e73b0b2a3dde3 100644 --- a/local/bin/py/build/configurations/pull_config_preview.yaml +++ b/local/bin/py/build/configurations/pull_config_preview.yaml @@ -392,29 +392,3 @@ - "runtime/**/*.md" options: dest_path: '/security_monitoring/default_rules/' - - - repo_name: datadog-operator - contents: - - - action: pull-and-push-file - branch: master - globs: - - 'docs/configuration.md' - options: - dest_path: '/agent/kubernetes/' - file_name: 'operator_configuration.md' - front_matters: - title: Datadog Operator Configuration - kind: documentation - description: "Configuration options for the Datadog Operator." 
- dependencies: ["https://github.com/DataDog/datadog-operator/blob/master/docs/configuration.md"] - further_reading: - - link: 'agent/kubernetes' - tag: 'Documentation' - text: 'Deploy Datadog with Kubernetes' - - link: 'agent/kubernetes/log' - tag: 'Documentation' - text: 'Collect your application logs' - - link: '/agent/kubernetes/apm' - tag: 'Documentation' - text: 'Collect your application traces' From 7dbd06d2e39c7916b43d8d7d2ee9171ccc1ab562 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 10:36:03 -0700 Subject: [PATCH 13/26] adding back operator stuff --- content/en/agent/guide/operator-advanced.md | 68 +++--- .../kubernetes/operator_configuration.md | 203 +++++++++++++++++- 2 files changed, 226 insertions(+), 45 deletions(-) diff --git a/content/en/agent/guide/operator-advanced.md b/content/en/agent/guide/operator-advanced.md index ab51629533e80..63b29d242a3e2 100644 --- a/content/en/agent/guide/operator-advanced.md +++ b/content/en/agent/guide/operator-advanced.md @@ -21,49 +21,30 @@ Using the Datadog Operator requires the following prerequisites: To use the Datadog Operator, deploy it in your Kubernetes cluster. Then create a `DatadogAgent` Kubernetes resource that contains the Datadog deployment configuration: -1. Download the [Datadog Operator project zip ball][4]. Source code can be found at [`DataDog/datadog-operator`][1]. -2. Unzip the project, and go into the `./datadog-operator` folder. -3. Define your namespace and operator: - - ```shell - DD_NAMESPACE="datadog" - DD_NAMEOP="ddoperator" - ``` - -4. Create the namespace: - - ```shell - kubectl create ns $DD_NAMESPACE - ``` - -5. Install the operator with Helm: - - - Helm v2: - - ```shell - helm install --name $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator - ``` - - - Helm v3: - - ```shell - helm install $DD_NAMEOP -n $DD_NAMESPACE ./chart/datadog-operator - ``` - +1. Add the Datadog Helm repo: + ``` + helm repo add datadog https://helm.datadoghq.com + ``` + +2. 
Install the Datadog Operator: + ``` + helm install datadog/datadog-operator + ``` + ## Deploy the Datadog Agents with the operator After deploying the Datadog Operator, create the `DatadogAgent` resource that triggers the Datadog Agent's deployment in your Kubernetes cluster. By creating this resource in the `Datadog-Operator` namespace, the Agent is deployed as a `DaemonSet` on every `Node` of your cluster. Create the `datadog-agent.yaml` manifest out of one of the following templates: -* [Manifest with Logs, APM, process, and metrics collection enabled.][5] -* [Manifest with Logs, APM, and metrics collection enabled.][6] -* [Manifest with Logs and metrics collection enabled.][7] -* [Manifest with APM and metrics collection enabled.][8] -* [Manifest with Cluster Agent.][9] -* [Manifest with tolerations.][10] +* [Manifest with Logs, APM, process, and metrics collection enabled.][4] +* [Manifest with Logs, APM, and metrics collection enabled.][5] +* [Manifest with Logs and metrics collection enabled.][6] +* [Manifest with APM and metrics collection enabled.][7] +* [Manifest with Cluster Agent.][8] +* [Manifest with tolerations.][9] -Replace `` and `` with your [Datadog API and application keys][11], then trigger the Agent installation with the following command: +Replace `` and `` with your [Datadog API and application keys][10], then trigger the Agent installation with the following command: ```shell $ kubectl apply -n $DD_NAMESPACE -f datadog-agent.yaml @@ -152,11 +133,10 @@ datadog-agent-zvdbw 1/1 Running 0 8m1s [1]: https://github.com/DataDog/datadog-operator [2]: https://helm.sh [3]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ -[4]: https://github.com/DataDog/datadog-operator/releases/latest -[5]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml -[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml -[7]: 
https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml -[8]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml -[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml -[10]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml -[11]: https://app.datadoghq.com/account/settings#api +[4]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-all.yaml +[5]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs-apm.yaml +[6]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-logs.yaml +[7]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-apm.yaml +[8]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-clusteragent.yaml +[9]: https://github.com/DataDog/datadog-operator/blob/master/examples/datadog-agent-with-tolerations.yaml +[10]: https://app.datadoghq.com/account/settings#api diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 695e802af6801..3615669d45c05 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -7,4 +7,205 @@ further_reading: text: 'Datadog and Kubernetes' --- -tk \ No newline at end of file +## All configuration options + +The following table lists the configurable parameters for the `DatadogAgent` +resource. 
For example, if you want to set a value for `agent.image.name`,
+your `DatadogAgent` resource would look like the following:
+
+```yaml
+apiVersion: datadoghq.com/v1alpha1
+kind: DatadogAgent
+metadata:
+  name: datadog
+spec:
+  agent:
+    image:
+      name: "datadog/agent:latest"
+```
+
+| Parameter | Description |
+|-----------|-------------|
+| `agent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the Agent Pods. |
+| `agent.additionalLabels` | AdditionalLabels provide labels that will be added to the Agent Pods. |
+| `agent.apm.enabled` | Enables APM and tracing on port 8126. Ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host |
+| `agent.apm.env` | The Datadog Agent supports many environment variables. Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| `agent.apm.hostPort` | Port number to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. |
+| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| `agent.config.checksd.configMapName` | ConfigMapName is the name of a ConfigMap used to mount a directory. |
+| `agent.config.collectEvents` | Enable this to start event collection from the Kubernetes API. Ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ |
+| `agent.config.confd.configMapName` | ConfigMapName is the name of a ConfigMap used to mount a directory. |
+| `agent.config.criSocket.criSocketPath` | Path to the container runtime socket (if different from Docker). This is supported starting from Agent 6.6.0. |
+| `agent.config.criSocket.dockerSocketPath` | Path to the Docker runtime socket. |
+| `agent.config.ddUrl` | The host of the Datadog intake server to send Agent data to. Only set this option if you need the Agent to send data to a custom URL; it overrides the site setting defined in "site". |
+| `agent.config.dogstatsd.dogstatsdOriginDetection` | Enable origin detection for container tagging. Ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging |
+| `agent.config.dogstatsd.useDogStatsDSocketVolume` | Enable DogStatsD over Unix Domain Socket. Ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ |
+| `agent.config.env` | The Datadog Agent supports many environment variables. Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| `agent.config.hostPort` | Port number to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort.
Most containers do not need this. | +| `agent.config.leaderElection` | Enables leader election mechanism for event collection. | +| `agent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| `agent.config.podAnnotationsAsTags` | Provide a mapping of Kubernetes Annotations to Datadog Tags. : | +| `agent.config.podLabelsAsTags` | Provide a mapping of Kubernetes Labels to Datadog Tags. : | +| `agent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.config.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| `agent.config.securityContext.capabilities.add` | Added capabilities | +| `agent.config.securityContext.capabilities.drop` | Removed capabilities | +| `agent.config.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| `agent.config.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. 
| +| `agent.config.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | +| `agent.config.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.config.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.config.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.config.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | +| `agent.config.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | +| `agent.config.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | +| `agent.config.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | +| `agent.config.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. 
This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| `agent.config.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| `agent.config.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| `agent.config.tags` | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ | +| `agent.config.tolerations` | If specified, the Agent pod's tolerations. | +| `agent.config.volumeMounts` | Specify additional volume mounts in the Datadog Agent container | +| `agent.config.volumes` | Specify additional volumes in the Datadog Agent container | +| `agent.customConfig.configData` | ConfigData corresponds to the configuration file content | +| `agent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| `agent.customConfig.configMap.name` | Name the ConfigMap name | +| `agent.daemonsetName` | Name of the Daemonset to create or migrate from | +| `agent.deploymentStrategy.canary.duration` | | +| `agent.deploymentStrategy.canary.paused` | | +| `agent.deploymentStrategy.canary.replicas` | | +| `agent.deploymentStrategy.reconcileFrequency` | The reconcile frequency of the ExtendDaemonSet | +| `agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation` | The maximum number of pods created in parallel.
Default value is 250. | +| `agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure` | MaxPodSchedulerFailure is the maximum number of pods not scheduled on their Node due to a scheduler failure (for example, resource constraints). Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. | +| `agent.deploymentStrategy.rollingUpdate.maxUnavailable` | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | +| `agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease` | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | +| `agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration` | SlowStartIntervalDuration is the duration between two rolling update steps. Default value is 1min. | +| `agent.deploymentStrategy.updateStrategyType` | The update strategy used for the DaemonSet | +| `agent.dnsConfig.nameservers` | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | +| `agent.dnsConfig.options` | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. | +| `agent.dnsConfig.searches` | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | +| `agent.dnsPolicy` | Set DNS policy for the pod. Defaults to "ClusterFirst".
Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | +| `agent.env` | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.hostNetwork` | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | +| `agent.hostPID` | Use the host's pid namespace. Optional: Default to false. | +| `agent.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| `agent.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| `agent.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `agent.log.containerCollectUsingFiles` | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | +| `agent.log.containerLogsPath` | Set this to allow log collection from the container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to `/var/lib/docker/containers` | +| `agent.log.enabled` | Enable this to activate Datadog Agent log collection.
ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | +| `agent.log.logsConfigContainerCollectAll` | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | +| `agent.log.openFilesLimit` | Set the maximum number of log files that the Datadog Agent tails. Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | +| `agent.log.podLogsPath` | Set this to allow log collection from the pod log path. Default to `/var/log/pods` | +| `agent.log.tempStoragePath` | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows the Agent to resume tailing the log files from the right offset. Default to `/var/lib/datadog-agent/logs` | +| `agent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| `agent.process.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | +| `agent.process.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.process.resources.limits` | Limits describes the maximum amount of compute resources allowed.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.process.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.rbac.create` | Used to configure RBAC resources creation | +| `agent.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | +| `agent.systemProbe.appArmorProfileName` | AppArmorProfileName specifies an AppArmor profile | +| `agent.systemProbe.bpfDebugEnabled` | BPFDebugEnabled enables logging for kernel debugging | +| `agent.systemProbe.conntrackEnabled` | ConntrackEnabled enables the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | +| `agent.systemProbe.debugPort` | DebugPort specifies the port to expose pprof and expvar for the system-probe agent | +| `agent.systemProbe.enabled` | Enable this to activate the system-probe agent. | +| `agent.systemProbe.env` | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.systemProbe.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.systemProbe.resources.requests` | Requests describes the minimum amount of compute resources required.
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.systemProbe.secCompCustomProfileConfigMap` | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | +| `agent.systemProbe.secCompProfileName` | SecCompProfileName specify a seccomp profile | +| `agent.systemProbe.secCompRootPath` | SecCompRootPath specify the seccomp profile root directory | +| `agent.systemProbe.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| `agent.systemProbe.securityContext.capabilities.add` | Added capabilities | +| `agent.systemProbe.securityContext.capabilities.drop` | Removed capabilities | +| `agent.systemProbe.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| `agent.systemProbe.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| `agent.systemProbe.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | +| `agent.systemProbe.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.systemProbe.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.systemProbe.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.systemProbe.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | +| `agent.systemProbe.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | +| `agent.systemProbe.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | +| `agent.systemProbe.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | +| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. 
This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| `agent.systemProbe.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| `agent.useExtendedDaemonset` | UseExtendedDaemonset enables the use of an ExtendedDaemonset for Agent deployment. Default value is false. | +| `clusterAgent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | +| `clusterAgent.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster-agent Pods. | +| `clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| `clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms. The terms are ORed.
| +| `clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterAgent.config.admissionController.enabled` | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | +| `clusterAgent.config.admissionController.mutateUnlabelled` | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | +| `clusterAgent.config.admissionController.serviceName` | ServiceName corresponds to the webhook service name | +| `clusterAgent.config.clusterChecksEnabled` | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | +| `clusterAgent.config.confd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | +| `clusterAgent.config.env` | The Datadog Agent supports many environment variables Ref: 
https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `clusterAgent.config.externalMetrics.enabled` | Enable the metricsProvider to be able to scale based on metrics in Datadog | +| `clusterAgent.config.externalMetrics.port` | If specified, configures the metricsProvider external metrics service port | +| `clusterAgent.config.externalMetrics.useDatadogMetrics` | Enable usage of the DatadogMetrics CRD (allows scaling on arbitrary queries) | +| `clusterAgent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| `clusterAgent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterAgent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterAgent.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Agent container | +| `clusterAgent.config.volumes` | Specify additional volumes in the Datadog Cluster Agent container | +| `clusterAgent.customConfig.configData` | ConfigData corresponds to the configuration file content | +| `clusterAgent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| `clusterAgent.customConfig.configMap.name` | Name the ConfigMap name | +| `clusterAgent.deploymentName` | Name of the Cluster Agent Deployment to create or migrate from | +| `clusterAgent.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| `clusterAgent.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| `clusterAgent.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `clusterAgent.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| `clusterAgent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. 
| +| `clusterAgent.rbac.create` | Used to configure RBAC resources creation | +| `clusterAgent.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | +| `clusterAgent.replicas` | Number of the Cluster Agent replicas | +| `clusterAgent.tolerations` | If specified, the Cluster-Agent pod's tolerations. | +| `clusterChecksRunner.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | +| `clusterChecksRunner.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | +| `clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| `clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms. The terms are ORed. | +| `clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterChecksRunner.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `clusterChecksRunner.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| `clusterChecksRunner.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterChecksRunner.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterChecksRunner.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Check Runner container | +| `clusterChecksRunner.config.volumes` | Specify additional volumes in the Datadog Cluster Check Runner container | +| `clusterChecksRunner.customConfig.configData` | ConfigData corresponds to the configuration file content | +| `clusterChecksRunner.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| `clusterChecksRunner.customConfig.configMap.name` | Name the ConfigMap name | +| `clusterChecksRunner.deploymentName` | Name of the cluster checks deployment to create or migrate from | +| `clusterChecksRunner.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| `clusterChecksRunner.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| `clusterChecksRunner.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `clusterChecksRunner.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| `clusterChecksRunner.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. 
If not specified, the pod priority will be default or zero if there is no default. | +| `clusterChecksRunner.rbac.create` | Used to configure RBAC resources creation | +| `clusterChecksRunner.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | +| `clusterChecksRunner.replicas` | Number of the Cluster Checks Runner replicas | +| `clusterChecksRunner.tolerations` | If specified, the Cluster-Checks pod's tolerations. | +| `clusterName` | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | +| `credentials.apiKey` | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | +| `credentials.apiKeyExistingSecret` | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | +| `credentials.apiSecret.keyName` | KeyName is the key of the secret to use | +| `credentials.apiSecret.secretName` | SecretName is the name of the secret | +| `credentials.appKey` | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | +| `credentials.appKeyExistingSecret` | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | +| `credentials.appSecret.keyName` | KeyName is the key of the secret to use | +| `credentials.appSecret.secretName` | SecretName is the name of the secret | +| `credentials.token` | This needs to be at least 32 characters a-zA-Z. It is a preshared key between the node agents and the cluster agent | +| `credentials.useSecretBackend` | UseSecretBackend enables use of the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster Agent, Cluster Checks.
If `useSecretBackend: true`, other credential parameters will be ignored. Default value is false. | +| `site` | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | \ No newline at end of file From a5ae44f616310ab5d61660fdbbcdde0331b2f7ea Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 10:48:23 -0700 Subject: [PATCH 14/26] no wrapping --- .../kubernetes/operator_configuration.md | 367 +++++++++--------- 1 file changed, 184 insertions(+), 183 deletions(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 3615669d45c05..9c6834b80f639 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -24,188 +24,189 @@ spec: name: "datadog/agent:latest" ``` + | Parameter | Description | |--------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------| -| `agent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | -| `agent.additionalLabels` | AdditionalLabels provide labels that will be added to the Agent Pods.
| -| `agent.apm.enabled` | Enable this to activate APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | -| `agent.apm.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.apm.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.config.checksd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | -| `agent.config.collectEvents` | Enable this to start event collection from the Kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ | -| `agent.config.confd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | -| `agent.config.criSocket.criSocketPath` | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 | -| `agent.config.criSocket.dockerSocketPath` | Path to the docker runtime socket | -| `agent.config.ddUrl` | The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site".
| -| `agent.config.dogstatsd.dogstatsdOriginDetection` | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging | -| `agent.config.dogstatsd.useDogStatsDSocketVolume` | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ | -| `agent.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.config.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| `agent.config.leaderElection` | Enables leader election mechanism for event collection. | -| `agent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| `agent.config.podAnnotationsAsTags` | Provide a mapping of Kubernetes Annotations to Datadog Tags: `<KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>` | -| `agent.config.podLabelsAsTags` | Provide a mapping of Kubernetes Labels to Datadog Tags: `<KUBERNETES_LABEL>: <DATADOG_TAG_KEY>` | -| `agent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.config.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process.
AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| `agent.config.securityContext.capabilities.add` | Added capabilities | -| `agent.config.securityContext.capabilities.drop` | Removed capabilities | -| `agent.config.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| `agent.config.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| `agent.config.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | -| `agent.config.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.config.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.config.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| -| `agent.config.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | -| `agent.config.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | -| `agent.config.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | -| `agent.config.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | -| `agent.config.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| `agent.config.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| `agent.config.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| `agent.config.tags` | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ | -| `agent.config.tolerations` | If specified, the Agent pod's tolerations. 
| -| `agent.config.volumeMounts` | Specify additional volume mounts in the Datadog Agent container | -| `agent.config.volumes` | Specify additional volumes in the Datadog Agent container | -| `agent.customConfig.configData` | ConfigData corresponds to the configuration file content | -| `agent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| `agent.customConfig.configMap.name` | Name the ConfigMap name | -| `agent.daemonsetName` | Name of the Daemonset to create or migrate from | -| `agent.deploymentStrategy.canary.duration` | | -| `agent.deploymentStrategy.canary.paused` | | -| `agent.deploymentStrategy.canary.replicas` | | -| `agent.deploymentStrategy.reconcileFrequency` | The reconcile frequency of the ExtendDaemonSet | -| `agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation` | The maximum number of pods created in parallel. Default value is 250. | -| `agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure` | MaxPodSchedulerFailure is the maximum number of pods not scheduled on their Node due to a scheduler failure, such as resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. | -| `agent.deploymentStrategy.rollingUpdate.maxUnavailable` | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | -| `agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease` | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5.
| -| `agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration` | SlowStartIntervalDuration is the duration between two slow-start increases. Default value is 1min. | -| `agent.deploymentStrategy.updateStrategyType` | The update strategy used for the DaemonSet | -| `agent.dnsConfig.nameservers` | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | -| `agent.dnsConfig.options` | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. | -| `agent.dnsConfig.searches` | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | -| `agent.dnsPolicy` | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | -| `agent.env` | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.hostNetwork` | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | -| `agent.hostPID` | Use the host's pid namespace. Optional: Default to false.
| -| `agent.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| `agent.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| `agent.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| `agent.log.containerCollectUsingFiles` | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | -| `agent.log.containerLogsPath` | Set this to allow log collection from the container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to `/var/lib/docker/containers` | -| `agent.log.enabled` | Enable this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| `agent.log.logsConfigContainerCollectAll` | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| `agent.log.openFilesLimit` | Set the maximum number of log files that the Datadog Agent will tail up to. Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | -| `agent.log.podLogsPath` | Set this to allow log collection from the pod log path. Default to `/var/log/pods` | -| `agent.log.tempStoragePath` | This path (always mounted from the host) is used by Datadog Agent to store information about processed log files.
If the Datadog Agent is restarted, it allows the Agent to resume tailing the log files from the right offset. Default to `/var/lib/datadog-agent/logs` | -| `agent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| `agent.process.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| `agent.process.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.process.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.process.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.rbac.create` | Used to configure RBAC resources creation | -| `agent.rbac.serviceAccountName` | Used to set up the service account name to use. Ignored if the field Create is true | -| `agent.systemProbe.appArmorProfileName` | AppArmorProfileName specifies an AppArmor profile | -| `agent.systemProbe.bpfDebugEnabled` | BPFDebugEnabled enables logging for kernel debugging | -| `agent.systemProbe.conntrackEnabled` | ConntrackEnabled enables the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | -| `agent.systemProbe.debugPort` | DebugPort specifies the port to expose pprof and expvar for the system-probe agent | -| `agent.systemProbe.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| `agent.systemProbe.env` | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.systemProbe.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.systemProbe.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.systemProbe.secCompCustomProfileConfigMap` | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | -| `agent.systemProbe.secCompProfileName` | SecCompProfileName specify a seccomp profile | -| `agent.systemProbe.secCompRootPath` | SecCompRootPath specify the seccomp profile root directory | -| `agent.systemProbe.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| `agent.systemProbe.securityContext.capabilities.add` | Added capabilities | -| `agent.systemProbe.securityContext.capabilities.drop` | Removed capabilities | -| `agent.systemProbe.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| `agent.systemProbe.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| `agent.systemProbe.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | -| `agent.systemProbe.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| -| `agent.systemProbe.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.systemProbe.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.systemProbe.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | -| `agent.systemProbe.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | -| `agent.systemProbe.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | -| `agent.systemProbe.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | -| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
| -| `agent.systemProbe.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| `agent.useExtendedDaemonset` | UseExtendedDaemonset enables ExtendedDaemonset for Agent deployment. Default value is false. | -| `clusterAgent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | -| `clusterAgent.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster-agent Pods. | -| `clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| `clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms. The terms are ORed. | -| `clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions.
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterAgent.config.admissionController.enabled` | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | -| `clusterAgent.config.admissionController.mutateUnlabelled` | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | -| `clusterAgent.config.admissionController.serviceName` | ServiceName corresponds to the webhook service name | -| `clusterAgent.config.clusterChecksEnabled` | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | -| `clusterAgent.config.confd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | -| `clusterAgent.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `clusterAgent.config.externalMetrics.enabled` | Enable the metricsProvider to be able to scale based on metrics in Datadog | -| `clusterAgent.config.externalMetrics.port` | If specified configures the metricsProvider external metrics service port | -| `clusterAgent.config.externalMetrics.useDatadogMetrics` | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries) | -| `clusterAgent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | 
-| `clusterAgent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterAgent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterAgent.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Agent container | -| `clusterAgent.config.volumes` | Specify additional volumes in the Datadog Cluster Agent container | -| `clusterAgent.customConfig.configData` | ConfigData corresponds to the configuration file content | -| `clusterAgent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| `clusterAgent.customConfig.configMap.name` | Name the ConfigMap name | -| `clusterAgent.deploymentName` | Name of the Cluster Agent Deployment to create or migrate from | -| `clusterAgent.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| `clusterAgent.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| `clusterAgent.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| `clusterAgent.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. 
More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | -| `clusterAgent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| `clusterAgent.rbac.create` | Used to configure RBAC resources creation | -| `clusterAgent.rbac.serviceAccountName` | Used to set up the service account name to use. Ignored if the field Create is true | -| `clusterAgent.replicas` | Number of the Cluster Agent replicas | -| `clusterAgent.tolerations` | If specified, the Cluster-Agent pod's tolerations. | -| `clusterChecksRunner.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | -| `clusterChecksRunner.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| `clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| `clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms.
The terms are ORed. | -| `clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterChecksRunner.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `clusterChecksRunner.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| `clusterChecksRunner.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterChecksRunner.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterChecksRunner.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Check Runner container | -| `clusterChecksRunner.config.volumes` | Specify additional volumes in the Datadog Cluster Check Runner container | -| `clusterChecksRunner.customConfig.configData` | ConfigData corresponds to the configuration file content | -| `clusterChecksRunner.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| `clusterChecksRunner.customConfig.configMap.name` | Name the ConfigMap name | -| `clusterChecksRunner.deploymentName` | Name of the cluster checks deployment to create or migrate from | -| `clusterChecksRunner.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| `clusterChecksRunner.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| `clusterChecksRunner.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| `clusterChecksRunner.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | -| `clusterChecksRunner.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. 
If not specified, the pod priority will be default or zero if there is no default. | -| `clusterChecksRunner.rbac.create` | Used to configure RBAC resources creation | -| `clusterChecksRunner.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | -| `clusterChecksRunner.replicas` | Number of the Cluster Agent replicas | -| `clusterChecksRunner.tolerations` | If specified, the Cluster-Checks pod's tolerations. | -| `clusterName` | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | -| `credentials.apiKey` | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | -| `credentials.apiKeyExistingSecret` | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | -| `credentials.apiSecret.keyName` | KeyName is the key of the secret to use | -| `credentials.apiSecret.secretName` | SecretName is the name of the secret | -| `credentials.appKey` | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | -| `credentials.appKeyExistingSecret` | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | -| `credentials.appSecret.keyName` | KeyName is the key of the secret to use | -| `credentials.appSecret.secretName` | SecretName is the name of the secret | -| `credentials.token` | This needs to be at least 32 characters a-zA-z It is a preshared key between the node agents and the cluster agent | -| `credentials.useSecretBackend` | UseSecretBackend use the Agent secret backend feature for retreiving all credentials needed by the different components: Agent, Cluster, Cluster-Checks. 
If `useSecretBackend: true`, other credential parameters will be ignored. default value is false. | -| `site` | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | \ No newline at end of file +| agent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | +| agent.additionalLabels | AdditionalLabels provide labels that will be added to the Agent Pods. | +| agent.apm.enabled | Enable this to activate APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | +| agent.apm.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.apm.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | +| agent.apm.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.apm.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.config.checksd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | +| agent.config.collectEvents | Enables this to start event collection from the kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ | +| agent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | +| agent.config.criSocket.criSocketPath | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 | +| agent.config.criSocket.dockerSocketPath | Path to the docker runtime socket | +| agent.config.ddUrl | The host of the Datadog intake server to send Agent data to. Only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site". | +| agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging | +| agent.config.dogstatsd.useDogStatsDSocketVolume | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ | +| agent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | +| agent.config.leaderElection | Enables leader election mechanism for event collection. | +| agent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags.
| +| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. | +| agent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.config.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| agent.config.securityContext.capabilities.add | Added capabilities | +| agent.config.securityContext.capabilities.drop | Removed capabilities | +| agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| agent.config.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | +| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext.
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
| +| agent.config.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| agent.config.tags | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ | +| agent.config.tolerations | If specified, the Agent pod's tolerations. | +| agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container | +| agent.config.volumes | Specify additional volumes in the Datadog Agent container | +| agent.customConfig.configData | ConfigData corresponds to the configuration file content | +| agent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| agent.customConfig.configMap.name | Name the ConfigMap name | +| agent.daemonsetName | Name of the Daemonset to create or migrate from | +| agent.deploymentStrategy.canary.duration | | +| agent.deploymentStrategy.canary.paused | | +| agent.deploymentStrategy.canary.replicas | | +| agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendDaemonSet | +| agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maximum number of pods created in parallel. Default value is 250. | +| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | MaxPodSchedulerFailure the maximum number of pods not scheduled on their Node due to a scheduler failure (resource constraints). Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%).
Absolute number is calculated from percentage by rounding up. | +| agent.deploymentStrategy.rollingUpdate.maxUnavailable | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | +| agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | +| agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | SlowStartIntervalDuration the duration between 2 pod creation batches. Default value is 1min. | +| agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet | +| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | +| agent.dnsConfig.options | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. | +| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | +| agent.dnsPolicy | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'.
| +| agent.env | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.hostNetwork | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | +| agent.hostPID | Use the host's pid namespace. Optional: Default to false. | +| agent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| agent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| agent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| agent.log.containerCollectUsingFiles | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | +| agent.log.containerLogsPath | Allows log collection from the container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to /var/lib/docker/containers | +| agent.log.enabled | Enable this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | +| agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | +| agent.log.openFilesLimit | Set the maximum number of log files that the Datadog Agent tails.
Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | +| agent.log.podLogsPath | Allows log collection from the pod log path. Default to /var/log/pods | +| agent.log.tempStoragePath | This path (always mounted from the host) is used by Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows the Agent to resume tailing the log files from the right offset. Default to /var/lib/datadog-agent/logs | +| agent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| agent.process.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | +| agent.process.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.process.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.process.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.rbac.create | Used to configure RBAC resources creation | +| agent.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field Create is true | +| agent.systemProbe.appArmorProfileName | AppArmorProfileName specify an AppArmor profile | +| agent.systemProbe.bpfDebugEnabled | BPFDebugEnabled enables logging for kernel debug | +| agent.systemProbe.conntrackEnabled | ConntrackEnabled enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | +| agent.systemProbe.debugPort | DebugPort Specify the port to expose pprof and expvar for system-probe agent | +| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | +| agent.systemProbe.env | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.systemProbe.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.systemProbe.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.systemProbe.secCompCustomProfileConfigMap | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | +| agent.systemProbe.secCompProfileName | SecCompProfileName specify a seccomp profile | +| agent.systemProbe.secCompRootPath | SecCompRootPath specify the seccomp profile root directory | +| agent.systemProbe.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| agent.systemProbe.securityContext.capabilities.add | Added capabilities | +| agent.systemProbe.securityContext.capabilities.drop | Removed capabilities | +| agent.systemProbe.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| agent.systemProbe.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| agent.systemProbe.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | +| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.systemProbe.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. 
May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| agent.useExtendedDaemonset | UseExtendedDaemonset use ExtendedDaemonset for Agent deployment. Default value is false. | +| clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | +| clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster-agent Pods. | +| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | +| clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterAgent.config.admissionController.enabled | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | +| clusterAgent.config.admissionController.mutateUnlabelled | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | +| clusterAgent.config.admissionController.serviceName | ServiceName corresponds to the webhook service name | +| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | +| clusterAgent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | +| clusterAgent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| clusterAgent.config.externalMetrics.enabled | Enable the metricsProvider to be able to scale based on metrics in Datadog | +| clusterAgent.config.externalMetrics.port | If specified configures the metricsProvider external metrics service port | +| clusterAgent.config.externalMetrics.useDatadogMetrics | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries) | +| clusterAgent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| clusterAgent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterAgent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterAgent.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Agent container | +| clusterAgent.config.volumes | Specify additional volumes in the Datadog Cluster Agent container | +| clusterAgent.customConfig.configData | ConfigData corresponds to the configuration file content | +| clusterAgent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| clusterAgent.customConfig.configMap.name | Name the ConfigMap name | +| clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from | +| clusterAgent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| clusterAgent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| clusterAgent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| clusterAgent.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| clusterAgent.priorityClassName | If specified, indicates the pod's priority. 
"system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| clusterAgent.rbac.create | Used to configure RBAC resources creation | +| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | +| clusterAgent.replicas | Number of the Cluster Agent replicas | +| clusterAgent.tolerations | If specified, the Cluster-Agent pod's tolerations. | +| clusterChecksRunner.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | +| clusterChecksRunner.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | +| clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. 
| +| clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterChecksRunner.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| clusterChecksRunner.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| clusterChecksRunner.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterChecksRunner.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterChecksRunner.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Check Runner container | +| clusterChecksRunner.config.volumes | Specify additional volumes in the Datadog Cluster Check Runner container | +| clusterChecksRunner.customConfig.configData | ConfigData corresponds to the configuration file content | +| clusterChecksRunner.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| clusterChecksRunner.customConfig.configMap.name | Name the ConfigMap name | +| clusterChecksRunner.deploymentName | Name of the cluster checks deployment to create or migrate from | +| clusterChecksRunner.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| clusterChecksRunner.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| clusterChecksRunner.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| clusterChecksRunner.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. 
If not specified, the pod priority will be default or zero if there is no default. | +| clusterChecksRunner.rbac.create | Used to configure RBAC resources creation | +| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field Create is true | +| clusterChecksRunner.replicas | Number of the Cluster Checks Runner replicas | +| clusterChecksRunner.tolerations | If specified, the Cluster-Checks pod's tolerations. | +| clusterName | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | +| credentials.apiKey | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | +| credentials.apiKeyExistingSecret | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | +| credentials.apiSecret.keyName | KeyName is the key of the secret to use | +| credentials.apiSecret.secretName | SecretName is the name of the secret | +| credentials.appKey | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | +| credentials.appKeyExistingSecret | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | +| credentials.appSecret.keyName | KeyName is the key of the secret to use | +| credentials.appSecret.secretName | SecretName is the name of the secret | +| credentials.token | This needs to be at least 32 characters a-zA-Z. It is a preshared key between the node agents and the cluster agent | +| credentials.useSecretBackend | UseSecretBackend use the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster, Cluster-Checks. 
If useSecretBackend: true, other credential parameters will be ignored. default value is false. | +| site | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | \ No newline at end of file From 4fec437614727b81ed1032beae02d83981061581 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 11:02:24 -0700 Subject: [PATCH 15/26] trying shortcode --- content/en/agent/kubernetes/operator_configuration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 9c6834b80f639..5d6f2765cd07c 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -139,7 +139,7 @@ spec: | clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | | clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | | clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. 
| +| {{< code-block wrap="false">}}clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms {{< /code-block >}} | Required. A list of node selector terms. The terms are ORed. | | clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | | clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | From 1a82e33c88312e002fba706268a7aa4d52112c52 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 11:09:40 -0700 Subject: [PATCH 16/26] trying again --- content/en/agent/kubernetes/operator_configuration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 5d6f2765cd07c..f84c1fa67d07e 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -139,7 +139,7 @@ spec: | clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | | clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | | clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. 
| -| {{< code-block wrap="false">}}clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms {{< /code-block >}} | Required. A list of node selector terms. The terms are ORed. | +| {{< code-block lamg="bash" wrap="false">}}clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms {{< /code-block >}} | Required. A list of node selector terms. The terms are ORed. | | clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
| | clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | From de464dc52c67803d8c8aa94f36d873071c11ab3f Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 11:18:40 -0700 Subject: [PATCH 17/26] i am literally an idiot --- content/en/agent/kubernetes/operator_configuration.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index f84c1fa67d07e..49703a2e3a68e 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -139,7 +139,7 @@ spec: | clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | | clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | | clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| {{< code-block lamg="bash" wrap="false">}}clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms {{< /code-block >}} | Required. A list of node selector terms. The terms are ORed. | +| {{< code-block lang="bash" wrap="false">}}clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms {{< /code-block >}} | Required. A list of node selector terms. The terms are ORed. | | clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | | clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | From 5869de3d2ad8609e81c73ab368726278c0f845ef Mon Sep 17 00:00:00 2001 From: zbayoff Date: Wed, 30 Sep 2020 15:46:15 -0400 Subject: [PATCH 18/26] test break-word on table --- content/en/agent/kubernetes/operator_configuration.md | 8 +++++--- layouts/shortcodes/table.html | 3 +-- src/styles/pages/_global.scss | 9 +++++++++ 3 files changed, 15 insertions(+), 5 deletions(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 49703a2e3a68e..510a436b969a5 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -24,7 +24,7 @@ spec: name: "datadog/agent:latest" ``` - +{{< table table-type="break-word" >}} | Parameter | Description | 
|--------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | agent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | @@ -139,7 +139,7 @@ spec: | clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | | clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | | clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. 
| -| {{< code-block lang="bash" wrap="false">}}clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms {{< /code-block >}} | Required. A list of node selector terms. The terms are ORed. | +| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | | clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | | clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | @@ -209,4 +209,6 @@ | credentials.appSecret.secretName | SecretName is the name of the secret | | credentials.token | This needs to be at least 32 characters a-zA-Z. It is a preshared key between the node agents and the cluster agent | | credentials.useSecretBackend | UseSecretBackend use the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster, Cluster-Checks. If useSecretBackend: true, other credential parameters will be ignored. default value is false. | -| site | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | \ No newline at end of file +| site | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | + +{{< /table >}} \ No newline at end of file diff --git a/layouts/shortcodes/table.html index 11e0d892fe09b..4e800fb4829e3 100644 --- a/layouts/shortcodes/table.html +++ b/layouts/shortcodes/table.html @@ -1,2 +1 @@ -{{ $_hugo_config := `{ "version": 1 }` }} -
{{- if .Get "responsive" -}}
{{- .Inner -}}
{{- else -}}{{- .Inner -}}{{- end -}}
+
{{- if .Get "responsive" -}}
{{- .Inner -}}
{{- else -}}{{- .Inner | markdownify -}}{{- end -}}
diff --git a/src/styles/pages/_global.scss b/src/styles/pages/_global.scss index 924b93e813916..544062b4d9b52 100644 --- a/src/styles/pages/_global.scss +++ b/src/styles/pages/_global.scss @@ -669,6 +669,15 @@ table, .table { } } +.break-word { + table { + td { + word-break: break-word; + } + } + +} + // external link logos .link-logo { From a013073bc4ee85b212c1e0ee6c0af9ca2d96e52c Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 13:22:42 -0700 Subject: [PATCH 19/26] one last formatting push --- .../kubernetes/operator_configuration.md | 366 +++++++++--------- 1 file changed, 183 insertions(+), 183 deletions(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 510a436b969a5..48ca2b2298606 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -27,188 +27,188 @@ spec: {{< table table-type="break-word" >}} | Parameter | Description | |--------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| agent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | -| agent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. 
| -| agent.apm.enabled | Enable this to enable APM and tracing on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | -| agent.apm.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.apm.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| agent.apm.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.apm.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.config.checksd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | -| agent.config.collectEvents | Enables this to start event collection from the Kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ | -| agent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | -| agent.config.criSocket.criSocketPath | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 | -| agent.config.criSocket.dockerSocketPath | Path to the docker runtime socket | -| agent.config.ddUrl | The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site". 
| -| agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging | -| agent.config.dogstatsd.useDogStatsDSocketVolume | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ | -| agent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| agent.config.leaderElection | Enables leader election mechanism for event collection. | -| agent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags. : | -| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. : | -| agent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.config.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. 
AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| agent.config.securityContext.capabilities.add | Added capabilities | -| agent.config.securityContext.capabilities.drop | Removed capabilities | -| agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| agent.config.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | -| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| -| agent.config.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | -| agent.config.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | -| agent.config.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | -| agent.config.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | -| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.config.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| agent.config.tags | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ | -| agent.config.tolerations | If specified, the Agent pod's tolerations. 
| -| agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container | -| agent.config.volumes | Specify additional volumes in the Datadog Agent container | -| agent.customConfig.configData | ConfigData corresponds to the configuration file content | -| agent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| agent.customConfig.configMap.name | Name the ConfigMap name | -| agent.daemonsetName | Name of the Daemonset to create or migrate from | -| agent.deploymentStrategy.canary.duration | | -| agent.deploymentStrategy.canary.paused | | -| agent.deploymentStrategy.canary.replicas | | -| agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendedDaemonSet | -| agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maximum number of pods created in parallel. Default value is 250. | -| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | MaxPodSchedulerFailure is the maximum number of pods not scheduled on their Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. | -| agent.deploymentStrategy.rollingUpdate.maxUnavailable | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | -| agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5.
| -| agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | SlowStartIntervalDuration is the duration between two slow start steps. Default value is 1min. | -| agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet | -| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | -| agent.dnsConfig.options | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. | -| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | -| agent.dnsPolicy | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | -| agent.env | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.hostNetwork | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | -| agent.hostPID | Use the host's pid namespace. Optional: Default to false.
| -| agent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| agent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| agent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| agent.log.containerCollectUsingFiles | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | -| agent.log.containerLogsPath | Allows log collection from the container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to /var/lib/docker/containers | -| agent.log.enabled | Enable this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| agent.log.openFilesLimit | Set the maximum number of log files that the Datadog Agent tails. Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | -| agent.log.podLogsPath | Allows log collection from the pod log path. Default to /var/log/pods | -| agent.log.tempStoragePath | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files.
If the Datadog Agent is restarted, this allows it to resume tailing the log files from the right offset. Default to /var/lib/datadog-agent/logs | -| agent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| agent.process.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| agent.process.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.process.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.process.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.rbac.create | Used to configure RBAC resources creation | -| agent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | -| agent.systemProbe.appArmorProfileName | AppArmorProfileName specifies an AppArmor profile | -| agent.systemProbe.bpfDebugEnabled | BPFDebugEnabled enables kernel debug logging | -| agent.systemProbe.conntrackEnabled | ConntrackEnabled enables the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | -| agent.systemProbe.debugPort | DebugPort specifies the port to expose pprof and expvar for the system-probe agent | -| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| agent.systemProbe.env | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.systemProbe.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.systemProbe.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.systemProbe.secCompCustomProfileConfigMap | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | -| agent.systemProbe.secCompProfileName | SecCompProfileName specify a seccomp profile | -| agent.systemProbe.secCompRootPath | SecCompRootPath specify the seccomp profile root directory | -| agent.systemProbe.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| agent.systemProbe.securityContext.capabilities.add | Added capabilities | -| agent.systemProbe.securityContext.capabilities.drop | Removed capabilities | -| agent.systemProbe.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| agent.systemProbe.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| agent.systemProbe.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | -| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | -| agent.systemProbe.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | -| agent.systemProbe.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | -| agent.systemProbe.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | -| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.systemProbe.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. 
May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| agent.useExtendedDaemonset | UseExtendedDaemonset uses ExtendedDaemonset for Agent deployment. Default value is false. | -| clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | -| clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster-agent Pods. | -| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | -| clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterAgent.config.admissionController.enabled | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | -| clusterAgent.config.admissionController.mutateUnlabelled | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | -| clusterAgent.config.admissionController.serviceName | ServiceName corresponds to the webhook service name | -| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | -| clusterAgent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | -| clusterAgent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| clusterAgent.config.externalMetrics.enabled | Enable the metricsProvider to be able to scale based on metrics in Datadog | -| clusterAgent.config.externalMetrics.port | If specified configures the metricsProvider external metrics service port | -| clusterAgent.config.externalMetrics.useDatadogMetrics | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries) | -| clusterAgent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| clusterAgent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterAgent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterAgent.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Agent container | -| clusterAgent.config.volumes | Specify additional volumes in the Datadog Cluster Agent container | -| clusterAgent.customConfig.configData | ConfigData corresponds to the configuration file content | -| clusterAgent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| clusterAgent.customConfig.configMap.name | Name the ConfigMap name | -| clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from | -| clusterAgent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| clusterAgent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| clusterAgent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| clusterAgent.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | -| clusterAgent.priorityClassName | If specified, indicates the pod's priority. 
"system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| clusterAgent.rbac.create | Used to configure RBAC resources creation | -| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | -| clusterAgent.replicas | Number of the Cluster Agent replicas | -| clusterAgent.tolerations | If specified, the Cluster-Agent pod's tolerations. | -| clusterChecksRunner.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | -| clusterChecksRunner.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. 
| -| clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterChecksRunner.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| clusterChecksRunner.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| clusterChecksRunner.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterChecksRunner.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterChecksRunner.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Check Runner container | -| clusterChecksRunner.config.volumes | Specify additional volumes in the Datadog Cluster Check Runner container | -| clusterChecksRunner.customConfig.configData | ConfigData corresponds to the configuration file content | -| clusterChecksRunner.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| clusterChecksRunner.customConfig.configMap.name | Name the ConfigMap name | -| clusterChecksRunner.deploymentName | Name of the cluster checks deployment to create or migrate from | -| clusterChecksRunner.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| clusterChecksRunner.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| clusterChecksRunner.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | -| clusterChecksRunner.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. 
If not specified, the pod priority will be default or zero if there is no default. | -| clusterChecksRunner.rbac.create | Used to configure RBAC resources creation | -| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | -| clusterChecksRunner.replicas | Number of the Cluster Agent replicas | -| clusterChecksRunner.tolerations | If specified, the Cluster-Checks pod's tolerations. | -| clusterName | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | -| credentials.apiKey | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | -| credentials.apiKeyExistingSecret | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | -| credentials.apiSecret.keyName | KeyName is the key of the secret to use | -| credentials.apiSecret.secretName | SecretName is the name of the secret | -| credentials.appKey | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | -| credentials.appKeyExistingSecret | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | -| credentials.appSecret.keyName | KeyName is the key of the secret to use | -| credentials.appSecret.secretName | SecretName is the name of the secret | -| credentials.token | This needs to be at least 32 characters a-zA-z It is a preshared key between the node agents and the cluster agent | -| credentials.useSecretBackend | UseSecretBackend use the Agent secret backend feature for retreiving all credentials needed by the different components: Agent, Cluster, Cluster-Checks. 
If useSecretBackend: true, other credential parameters will be ignored. default value is false. | -| site | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | +| `agent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | +| `agent.additionalLabels` | AdditionalLabels provide labels that will be added to the Agent Pods. | +| `agent.apm.enabled` | Enable this to activate APM and tracing on port 8126. ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | +| `agent.apm.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.apm.hostPort` | Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | +| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.config.checksd.configMapName` | ConfigMapName is the name of a ConfigMap used to mount a directory | +| `agent.config.collectEvents` | Enable this to start event collection from the Kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ | +| `agent.config.confd.configMapName` | ConfigMapName is the name of a ConfigMap used to mount a directory | +| `agent.config.criSocket.criSocketPath` | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 | +| `agent.config.criSocket.dockerSocketPath` | Path to the docker runtime socket | +| `agent.config.ddUrl` | The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site". | +| `agent.config.dogstatsd.dogstatsdOriginDetection` | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging | +| `agent.config.dogstatsd.useDogStatsDSocketVolume` | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ | +| `agent.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.config.hostPort` | Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | +| `agent.config.leaderElection` | Enables leader election mechanism for event collection. | +| `agent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| `agent.config.podAnnotationsAsTags` | Provide a mapping of Kubernetes Annotations to Datadog Tags, in the form `<KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>` |
+| `agent.config.podLabelsAsTags` | Provide a mapping of Kubernetes Labels to Datadog Tags, in the form `<KUBERNETES_LABEL>: <DATADOG_TAG_KEY>` | +| `agent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.config.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| `agent.config.securityContext.capabilities.add` | Added capabilities | +| `agent.config.securityContext.capabilities.drop` | Removed capabilities | +| `agent.config.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| `agent.config.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| `agent.config.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | +| `agent.config.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext.
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.config.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.config.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.config.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | +| `agent.config.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | +| `agent.config.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | +| `agent.config.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | +| `agent.config.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| `agent.config.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
| +| `agent.config.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| `agent.config.tags` | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ | +| `agent.config.tolerations` | If specified, the Agent pod's tolerations. | +| `agent.config.volumeMounts` | Specify additional volume mounts in the Datadog Agent container | +| `agent.config.volumes` | Specify additional volumes in the Datadog Agent container | +| `agent.customConfig.configData` | ConfigData corresponds to the configuration file content | +| `agent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| `agent.customConfig.configMap.name` | Name of the ConfigMap | +| `agent.daemonsetName` | Name of the Daemonset to create or migrate from | +| `agent.deploymentStrategy.canary.duration` | | +| `agent.deploymentStrategy.canary.paused` | | +| `agent.deploymentStrategy.canary.replicas` | | +| `agent.deploymentStrategy.reconcileFrequency` | The reconcile frequency of the ExtendDaemonSet | +| `agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation` | The maximum number of pods created in parallel. Default value is 250. | +| `agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure` | MaxPodSchedulerFailure is the maximum number of pods that can fail to be scheduled on their Node due to a scheduler failure (for example, resource constraints). Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%).
Absolute number is calculated from percentage by rounding up. | +| `agent.deploymentStrategy.rollingUpdate.maxUnavailable` | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | +| `agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease` | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | +| `agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration` | SlowStartIntervalDuration is the duration between two batch size increases. Default value is 1min. | +| `agent.deploymentStrategy.updateStrategyType` | The update strategy used for the DaemonSet | +| `agent.dnsConfig.nameservers` | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | +| `agent.dnsConfig.options` | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. | +| `agent.dnsConfig.searches` | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | +| `agent.dnsPolicy` | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'.
| +| `agent.env` | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.hostNetwork` | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | +| `agent.hostPID` | Use the host's pid namespace. Optional: Default to false. | +| `agent.image.name` | Define the image to use. Use "datadog/agent:latest" for Datadog Agent 6, "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6, or "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| `agent.image.pullPolicy` | The Kubernetes pull policy. Use Always, Never, or IfNotPresent | +| `agent.image.pullSecrets` | It is possible to specify docker registry credentials. See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `agent.log.containerCollectUsingFiles` | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | +| `agent.log.containerLogsPath` | Allows log collection from the container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to `/var/lib/docker/containers` | +| `agent.log.enabled` | Enable this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | +| `agent.log.logsConfigContainerCollectAll` | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | +| `agent.log.openFilesLimit` | Set the maximum number of log files that the Datadog Agent will tail up to.
Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | +| `agent.log.podLogsPath` | Allows log collection from the pod log path. Default to `/var/log/pods` | +| `agent.log.tempStoragePath` | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows the Agent to resume tailing the log files from the right offset. Default to `/var/lib/datadog-agent/logs` | +| `agent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| `agent.process.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | +| `agent.process.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.process.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.process.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.rbac.create` | Used to configure RBAC resources creation | +| `agent.rbac.serviceAccountName` | Used to set up the service account name to use. Ignored if the field Create is true | +| `agent.systemProbe.appArmorProfileName` | AppArmorProfileName specifies an AppArmor profile | +| `agent.systemProbe.bpfDebugEnabled` | BPFDebugEnabled enables kernel debug logging | +| `agent.systemProbe.conntrackEnabled` | ConntrackEnabled enables the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | +| `agent.systemProbe.debugPort` | DebugPort specifies the port to expose pprof and expvar for the system-probe agent | +| `agent.systemProbe.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | +| `agent.systemProbe.env` | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.systemProbe.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.systemProbe.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `agent.systemProbe.secCompCustomProfileConfigMap` | SecCompCustomProfileConfigMap specifies a pre-existing ConfigMap containing a custom SecComp profile | +| `agent.systemProbe.secCompProfileName` | SecCompProfileName specifies a seccomp profile | +| `agent.systemProbe.secCompRootPath` | SecCompRootPath specifies the seccomp profile root directory | +| `agent.systemProbe.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| `agent.systemProbe.securityContext.capabilities.add` | Added capabilities | +| `agent.systemProbe.securityContext.capabilities.drop` | Removed capabilities | +| `agent.systemProbe.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| `agent.systemProbe.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| `agent.systemProbe.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | +| `agent.systemProbe.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
| +| `agent.systemProbe.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.systemProbe.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| `agent.systemProbe.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | +| `agent.systemProbe.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | +| `agent.systemProbe.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | +| `agent.systemProbe.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | +| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
| +| `agent.systemProbe.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| `agent.useExtendedDaemonset` | UseExtendedDaemonset enables ExtendedDaemonset for Agent deployment. Default value is false. | +| `clusterAgent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | +| `clusterAgent.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster-agent Pods. | +| `clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| `clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms. The terms are ORed. | +| `clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions.
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterAgent.config.admissionController.enabled` | Enable the admission controller to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | +| `clusterAgent.config.admissionController.mutateUnlabelled` | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | +| `clusterAgent.config.admissionController.serviceName` | ServiceName corresponds to the webhook service name | +| `clusterAgent.config.clusterChecksEnabled` | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | +| `clusterAgent.config.confd.configMapName` | ConfigMapName is the name of a ConfigMap used to mount a directory | +| `clusterAgent.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `clusterAgent.config.externalMetrics.enabled` | Enable the metricsProvider to be able to scale based on metrics in Datadog | +| `clusterAgent.config.externalMetrics.port` | If specified, configures the metricsProvider external metrics service port | +| `clusterAgent.config.externalMetrics.useDatadogMetrics` | Enable usage of the DatadogMetrics CRD (allows scaling on arbitrary queries) | +| `clusterAgent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off |
+| `clusterAgent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterAgent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterAgent.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Agent container | +| `clusterAgent.config.volumes` | Specify additional volumes in the Datadog Cluster Agent container | +| `clusterAgent.customConfig.configData` | ConfigData corresponds to the configuration file content | +| `clusterAgent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| `clusterAgent.customConfig.configMap.name` | Name of the ConfigMap | +| `clusterAgent.deploymentName` | Name of the Cluster Agent Deployment to create or migrate from | +| `clusterAgent.image.name` | Define the image to use. Use "datadog/agent:latest" for Datadog Agent 6, "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6, or "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| `clusterAgent.image.pullPolicy` | The Kubernetes pull policy. Use Always, Never, or IfNotPresent | +| `clusterAgent.image.pullSecrets` | It is possible to specify docker registry credentials. See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `clusterAgent.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node.
More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| `clusterAgent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| `clusterAgent.rbac.create` | Used to configure RBAC resources creation | +| `clusterAgent.rbac.serviceAccountName` | Used to set up the service account name to use. Ignored if the field Create is true | +| `clusterAgent.replicas` | Number of the Cluster Agent replicas | +| `clusterAgent.tolerations` | If specified, the Cluster-Agent pod's tolerations. | +| `clusterChecksRunner.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | +| `clusterChecksRunner.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | +| `clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| `clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms.
The terms are ORed. | +| `clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| `clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| `clusterChecksRunner.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `clusterChecksRunner.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| `clusterChecksRunner.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterChecksRunner.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| `clusterChecksRunner.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Check Runner container | +| `clusterChecksRunner.config.volumes` | Specify additional volumes in the Datadog Cluster Check Runner container | +| `clusterChecksRunner.customConfig.configData` | ConfigData corresponds to the configuration file content | +| `clusterChecksRunner.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| `clusterChecksRunner.customConfig.configMap.name` | Name of the ConfigMap | +| `clusterChecksRunner.deploymentName` | Name of the cluster checks deployment to create or migrate from | +| `clusterChecksRunner.image.name` | Define the image to use. Use "datadog/agent:latest" for Datadog Agent 6, "datadog/dogstatsd:latest" for standalone Datadog Agent DogStatsD6, or "datadog/cluster-agent:latest" for the Datadog Cluster Agent | +| `clusterChecksRunner.image.pullPolicy` | The Kubernetes pull policy. Use Always, Never, or IfNotPresent | +| `clusterChecksRunner.image.pullSecrets` | It is possible to specify Docker registry credentials. See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| `clusterChecksRunner.nodeSelector` | NodeSelector is a selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| `clusterChecksRunner.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities, with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name.
If not specified, the pod priority will be default or zero if there is no default. | +| `clusterChecksRunner.rbac.create` | Used to configure RBAC resources creation | +| `clusterChecksRunner.rbac.serviceAccountName` | Used to set up the service account name to use. Ignored if the field Create is true | +| `clusterChecksRunner.replicas` | Number of the Cluster Checks Runner replicas | +| `clusterChecksRunner.tolerations` | If specified, the Cluster Checks Runner pod's tolerations. | +| `clusterName` | Set a unique cluster name to allow scoping hosts and Cluster Checks Runners easily | +| `credentials.apiKey` | APIKey: Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | +| `credentials.apiKeyExistingSecret` | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | +| `credentials.apiSecret.keyName` | KeyName is the key of the secret to use | +| `credentials.apiSecret.secretName` | SecretName is the name of the secret | +| `credentials.appKey` | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | +| `credentials.appKeyExistingSecret` | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | +| `credentials.appSecret.keyName` | KeyName is the key of the secret to use | +| `credentials.appSecret.secretName` | SecretName is the name of the secret | +| `credentials.token` | This needs to be at least 32 characters (a-zA-Z). It is a preshared key between the node Agents and the Cluster Agent | +| `credentials.useSecretBackend` | UseSecretBackend enables the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster Agent, Cluster Checks Runner.
If `useSecretBackend: true`, other credential parameters will be ignored. default value is false. | +| `site` | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | {{< /table >}} \ No newline at end of file From 911ff7ab10c4166a2074dfcd02bd62d580cebc05 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 13:31:35 -0700 Subject: [PATCH 20/26] undoing that --- .../kubernetes/operator_configuration.md | 366 +++++++++--------- 1 file changed, 183 insertions(+), 183 deletions(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 48ca2b2298606..510a436b969a5 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -27,188 +27,188 @@ spec: {{< table table-type="break-word" >}} | Parameter | Description | |--------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `agent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | -| `agent.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. 
| -| `agent.apm.enabled` | Enable this to enable APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | -| `agent.apm.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.apm.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.config.checksd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | -| `agent.config.collectEvents` | nables this to start event collection from the kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ | -| `agent.config.confd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | -| `agent.config.criSocket.criSocketPath` | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 | -| `agent.config.criSocket.dockerSocketPath` | Path to the docker runtime socket | -| `agent.config.ddUrl` | The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site". 
| -| `agent.config.dogstatsd.dogstatsdOriginDetection` | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging | -| `agent.config.dogstatsd.useDogStatsDSocketVolume` | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ | -| `agent.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.config.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| `agent.config.leaderElection` | Enables leader election mechanism for event collection. | -| `agent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| `agent.config.podAnnotationsAsTags` | Provide a mapping of Kubernetes Annotations to Datadog Tags. : | -| `agent.config.podLabelsAsTags` | Provide a mapping of Kubernetes Labels to Datadog Tags. : | -| `agent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.config.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. 
AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| `agent.config.securityContext.capabilities.add` | Added capabilities | -| `agent.config.securityContext.capabilities.drop` | Removed capabilities | -| `agent.config.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| `agent.config.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| `agent.config.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | -| `agent.config.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.config.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.config.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| -| `agent.config.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | -| `agent.config.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | -| `agent.config.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | -| `agent.config.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | -| `agent.config.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| `agent.config.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| `agent.config.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| `agent.config.tags` | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ | -| `agent.config.tolerations` | If specified, the Agent pod's tolerations. 
| -| `agent.config.volumeMounts` | Specify additional volume mounts in the Datadog Agent container | -| `agent.config.volumes` | Specify additional volumes in the Datadog Agent container | -| `agent.customConfig.configData` | ConfigData corresponds to the configuration file content | -| `agent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| `agent.customConfig.configMap.name` | Name the ConfigMap name | -| `agent.daemonsetName` | Name of the Daemonset to create or migrate from | -| `agent.deploymentStrategy.canary.duration` | | -| `agent.deploymentStrategy.canary.paused` | | -| `agent.deploymentStrategy.canary.replicas` | | -| `agent.deploymentStrategy.reconcileFrequency` | The reconcile frequency of the ExtendDaemonSet | -| `agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation` | The maxium number of pods created in parallel. Default value is 250. | -| `agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure` | MaxPodSchedulerFailure the maxinum number of not scheduled on its Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute | -| `agent.deploymentStrategy.rollingUpdate.maxUnavailable` | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | -| `agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease` | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. 
| -| `agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration` | SlowStartIntervalDuration the duration between to 2 Default value is 1min. | -| `agent.deploymentStrategy.updateStrategyType` | The update strategy used for the DaemonSet | -| `agent.dnsConfig.nameservers` | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | -| `agent.dnsConfig.options` | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. | -| `agent.dnsConfig.searches` | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | -| `agent.dnsPolicy` | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | -| `agent.env` | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.hostNetwork` | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | -| `agent.hostPID` | Use the host's pid namespace. Optional: Default to false. 
| -| `agent.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| `agent.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| `agent.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| `agent.log.containerCollectUsingFiles` | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | -| `agent.log.containerLogsPath` | This to allow log collection from container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to `/var/lib/docker/containers` | -| `agent.log.enabled` | Enables this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| `agent.log.logsConfigContainerCollectAll` | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| `agent.log.openFilesLimit` | Set the maximum number of logs files that the Datadog Agent will tail up to. Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | -| `agent.log.podLogsPath` | This to allow log collection from pod log path. Default to `/var/log/pods` | -| `agent.log.tempStoragePath` | This path (always mounted from the host) is used by Datadog Agent to store information about processed log files. 
If the Datadog Agent is restarted, it allows to start tailing the log files from the right offset Default to `/var/lib/datadog-agent/logs` | -| `agent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| `agent.process.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| `agent.process.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.process.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.process.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.rbac.create` | Used to configure RBAC resources creation | -| `agent.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | -| `agent.systemProbe.appArmorProfileName` | AppArmorProfileName specify a apparmor profile | -| `agent.systemProbe.bpfDebugEnabled` | BPFDebugEnabled logging for kernel debug | -| `agent.systemProbe.conntrackEnabled` | ConntrackEnabled enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | -| `agent.systemProbe.debugPort` | DebugPort Specify the port to expose pprof and expvar for system-probe agent | -| `agent.systemProbe.enabled` | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| `agent.systemProbe.env` | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `agent.systemProbe.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.systemProbe.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.systemProbe.secCompCustomProfileConfigMap` | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | -| `agent.systemProbe.secCompProfileName` | SecCompProfileName specify a seccomp profile | -| `agent.systemProbe.secCompRootPath` | SecCompRootPath specify the seccomp profile root directory | -| `agent.systemProbe.securityContext.allowPrivilegeEscalation` | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| `agent.systemProbe.securityContext.capabilities.add` | Added capabilities | -| `agent.systemProbe.securityContext.capabilities.drop` | Removed capabilities | -| `agent.systemProbe.securityContext.privileged` | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| `agent.systemProbe.securityContext.procMount` | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| `agent.systemProbe.securityContext.readOnlyRootFilesystem` | Whether this container has a read-only root filesystem. Default is false. | -| `agent.systemProbe.securityContext.runAsGroup` | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| -| `agent.systemProbe.securityContext.runAsNonRoot` | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.systemProbe.securityContext.runAsUser` | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| `agent.systemProbe.securityContext.seLinuxOptions.level` | Level is SELinux level label that applies to the container. | -| `agent.systemProbe.securityContext.seLinuxOptions.role` | Role is a SELinux role label that applies to the container. | -| `agent.systemProbe.securityContext.seLinuxOptions.type` | Type is a SELinux type label that applies to the container. | -| `agent.systemProbe.securityContext.seLinuxOptions.user` | User is a SELinux user label that applies to the container. | -| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec` | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| `agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName` | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
| -| `agent.systemProbe.securityContext.windowsOptions.runAsUserName` | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| `agent.useExtendedDaemonset` | UseExtendedDaemonset use ExtendedDaemonset for Agent deployment. default value is false. | -| `clusterAgent.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | -| `clusterAgent.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| `clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| `clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms. The terms are ORed. | -| `clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterAgent.config.admissionController.enabled` | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | -| `clusterAgent.config.admissionController.mutateUnlabelled` | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | -| `clusterAgent.config.admissionController.serviceName` | ServiceName corresponds to the webhook service name | -| `clusterAgent.config.clusterChecksEnabled` | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | -| `clusterAgent.config.confd.configMapName` | ConfigMapName name of a ConfigMap used to mount a directory | -| `clusterAgent.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `clusterAgent.config.externalMetrics.enabled` | Enable the metricsProvider to be able to scale based on metrics in Datadog | -| `clusterAgent.config.externalMetrics.port` | If specified configures the metricsProvider external metrics service port | -| `clusterAgent.config.externalMetrics.useDatadogMetrics` | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries) | -| `clusterAgent.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | 
-| `clusterAgent.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterAgent.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterAgent.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Agent container | -| `clusterAgent.config.volumes` | Specify additional volumes in the Datadog Cluster Agent container | -| `clusterAgent.customConfig.configData` | ConfigData corresponds to the configuration file content | -| `clusterAgent.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| `clusterAgent.customConfig.configMap.name` | Name the ConfigMap name | -| `clusterAgent.deploymentName` | Name of the Cluster Agent Deployment to create or migrate from | -| `clusterAgent.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| `clusterAgent.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| `clusterAgent.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| `clusterAgent.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. 
More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | -| `clusterAgent.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| `clusterAgent.rbac.create` | Used to configure RBAC resources creation | -| `clusterAgent.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | -| `clusterAgent.replicas` | Number of the Cluster Agent replicas | -| `clusterAgent.tolerations` | If specified, the Cluster-Agent pod's tolerations. | -| `clusterChecksRunner.additionalAnnotations` | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | -| `clusterChecksRunner.additionalLabels` | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| `clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| `clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms` | Required. A list of node selector terms. 
The terms are ORed. | -| `clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution` | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| `clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution` | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| `clusterChecksRunner.config.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| `clusterChecksRunner.config.logLevel` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| `clusterChecksRunner.config.resources.limits` | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterChecksRunner.config.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `clusterChecksRunner.config.volumeMounts` | Specify additional volume mounts in the Datadog Cluster Check Runner container | -| `clusterChecksRunner.config.volumes` | Specify additional volumes in the Datadog Cluster Check Runner container | -| `clusterChecksRunner.customConfig.configData` | ConfigData corresponds to the configuration file content | -| `clusterChecksRunner.customConfig.configMap.fileKey` | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| `clusterChecksRunner.customConfig.configMap.name` | Name the ConfigMap name | -| `clusterChecksRunner.deploymentName` | Name of the cluster checks deployment to create or migrate from | -| `clusterChecksRunner.image.name` | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| `clusterChecksRunner.image.pullPolicy` | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| `clusterChecksRunner.image.pullSecrets` | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| `clusterChecksRunner.nodeSelector` | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | -| `clusterChecksRunner.priorityClassName` | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. 
If not specified, the pod priority will be default or zero if there is no default. | -| `clusterChecksRunner.rbac.create` | Used to configure RBAC resources creation | -| `clusterChecksRunner.rbac.serviceAccountName` | Used to set up the service account name to use Ignored if the field Create is true | -| `clusterChecksRunner.replicas` | Number of the Cluster Agent replicas | -| `clusterChecksRunner.tolerations` | If specified, the Cluster-Checks pod's tolerations. | -| `clusterName` | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | -| `credentials.apiKey` | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | -| `credentials.apiKeyExistingSecret` | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | -| `credentials.apiSecret.keyName` | KeyName is the key of the secret to use | -| `credentials.apiSecret.secretName` | SecretName is the name of the secret | -| `credentials.appKey` | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | -| `credentials.appKeyExistingSecret` | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | -| `credentials.appSecret.keyName` | KeyName is the key of the secret to use | -| `credentials.appSecret.secretName` | SecretName is the name of the secret | -| `credentials.token` | This needs to be at least 32 characters a-zA-z It is a preshared key between the node agents and the cluster agent | -| `credentials.useSecretBackend` | UseSecretBackend use the Agent secret backend feature for retreiving all credentials needed by the different components: Agent, Cluster, Cluster-Checks. 
If `useSecretBackend: true`, other credential parameters will be ignored. default value is false. |
-| `site` | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. |
+| agent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the Agent Pods. |
+| agent.additionalLabels | AdditionalLabels provide labels that will be added to the Agent Pods. |
+| agent.apm.enabled | Enable this to enable APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host |
+| agent.apm.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| agent.apm.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. |
+| agent.apm.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.apm.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.config.checksd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory |
+| agent.config.collectEvents | Enables this to start event collection from the Kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ |
+| agent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory |
+| agent.config.criSocket.criSocketPath | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 |
+| agent.config.criSocket.dockerSocketPath | Path to the docker runtime socket |
+| agent.config.ddUrl | The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site". |
+| agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging |
+| agent.config.dogstatsd.useDogStatsDSocketVolume | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ |
+| agent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. |
+| agent.config.leaderElection | Enables leader election mechanism for event collection. |
+| agent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off |
+| agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags.
|
+| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. |
+| agent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.config.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN |
+| agent.config.securityContext.capabilities.add | Added capabilities |
+| agent.config.securityContext.capabilities.drop | Removed capabilities |
+| agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. |
+| agent.config.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. |
+| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. |
+| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext.
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
|
+| agent.config.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. |
+| agent.config.tags | List of tags to attach to every metric, event and service check collected by this Agent. Learn more about tagging: https://docs.datadoghq.com/tagging/ |
+| agent.config.tolerations | If specified, the Agent pod's tolerations. |
+| agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container |
+| agent.config.volumes | Specify additional volumes in the Datadog Agent container |
+| agent.customConfig.configData | ConfigData corresponds to the configuration file content |
+| agent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content |
+| agent.customConfig.configMap.name | Name the ConfigMap name |
+| agent.daemonsetName | Name of the Daemonset to create or migrate from |
+| agent.deploymentStrategy.canary.duration | |
+| agent.deploymentStrategy.canary.paused | |
+| agent.deploymentStrategy.canary.replicas | |
+| agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendedDaemonSet |
+| agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maximum number of pods created in parallel. Default value is 250. |
+| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | MaxPodSchedulerFailure the maximum number of pods not scheduled on their Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%).
Absolute number is calculated from percentage by rounding up. |
+| agent.deploymentStrategy.rollingUpdate.maxUnavailable | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. |
+| agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. |
+| agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | SlowStartIntervalDuration the duration between two rounds of pod creation. Default value is 1min. |
+| agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet |
+| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. |
+| agent.dnsConfig.options | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. |
+| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. |
+| agent.dnsPolicy | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'.
|
+| agent.env | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| agent.hostNetwork | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. |
+| agent.hostPID | Use the host's pid namespace. Optional: Default to false. |
+| agent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent |
+| agent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent |
+| agent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod |
+| agent.log.containerCollectUsingFiles | Collect logs from files in /var/log/pods instead of using the container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true |
+| agent.log.containerLogsPath | Set this to allow log collection from the container log path. Set to a different path if not using the docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to /var/lib/docker/containers |
+| agent.log.enabled | Enable this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup |
+| agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup |
+| agent.log.openFilesLimit | Set the maximum number of log files that the Datadog Agent will tail.
Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 |
+| agent.log.podLogsPath | Set this to allow log collection from the pod log path. Default to /var/log/pods |
+| agent.log.tempStoragePath | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows tailing to resume from the right offset. Default to /var/lib/datadog-agent/logs |
+| agent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. |
+| agent.process.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset |
+| agent.process.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| agent.process.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.process.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.rbac.create | Used to configure RBAC resources creation |
+| agent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true |
+| agent.systemProbe.appArmorProfileName | AppArmorProfileName specify an AppArmor profile |
+| agent.systemProbe.bpfDebugEnabled | BPFDebugEnabled enables kernel debug logging |
+| agent.systemProbe.conntrackEnabled | ConntrackEnabled enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ |
+| agent.systemProbe.debugPort | DebugPort Specify the port to expose pprof and expvar for the system-probe agent |
+| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset |
+| agent.systemProbe.env | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables |
+| agent.systemProbe.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ |
+| agent.systemProbe.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| agent.systemProbe.secCompCustomProfileConfigMap | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | +| agent.systemProbe.secCompProfileName | SecCompProfileName specify a seccomp profile | +| agent.systemProbe.secCompRootPath | SecCompRootPath specify the seccomp profile root directory | +| agent.systemProbe.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | +| agent.systemProbe.securityContext.capabilities.add | Added capabilities | +| agent.systemProbe.securityContext.capabilities.drop | Removed capabilities | +| agent.systemProbe.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | +| agent.systemProbe.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| agent.systemProbe.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | +| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.systemProbe.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. 
May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| agent.useExtendedDaemonset | UseExtendedDaemonset use ExtendedDaemonset for Agent deployment. default value is false. | +| clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | +| clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | +| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | +| clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterAgent.config.admissionController.enabled | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | +| clusterAgent.config.admissionController.mutateUnlabelled | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | +| clusterAgent.config.admissionController.serviceName | ServiceName corresponds to the webhook service name | +| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | +| clusterAgent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | +| clusterAgent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| clusterAgent.config.externalMetrics.enabled | Enable the metricsProvider to be able to scale based on metrics in Datadog | +| clusterAgent.config.externalMetrics.port | If specified configures the metricsProvider external metrics service port | +| clusterAgent.config.externalMetrics.useDatadogMetrics | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries) | +| clusterAgent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| clusterAgent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterAgent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterAgent.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Agent container | +| clusterAgent.config.volumes | Specify additional volumes in the Datadog Cluster Agent container | +| clusterAgent.customConfig.configData | ConfigData corresponds to the configuration file content | +| clusterAgent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| clusterAgent.customConfig.configMap.name | Name the ConfigMap name | +| clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from | +| clusterAgent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| clusterAgent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| clusterAgent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| clusterAgent.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| clusterAgent.priorityClassName | If specified, indicates the pod's priority. 
"system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| clusterAgent.rbac.create | Used to configure RBAC resources creation | +| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | +| clusterAgent.replicas | Number of the Cluster Agent replicas | +| clusterAgent.tolerations | If specified, the Cluster-Agent pod's tolerations. | +| clusterChecksRunner.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | +| clusterChecksRunner.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | +| clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. 
| +| clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | +| clusterChecksRunner.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| clusterChecksRunner.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | +| clusterChecksRunner.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterChecksRunner.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | +| clusterChecksRunner.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Check Runner container | +| clusterChecksRunner.config.volumes | Specify additional volumes in the Datadog Cluster Check Runner container | +| clusterChecksRunner.customConfig.configData | ConfigData corresponds to the configuration file content | +| clusterChecksRunner.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | +| clusterChecksRunner.customConfig.configMap.name | Name the ConfigMap name | +| clusterChecksRunner.deploymentName | Name of the cluster checks deployment to create or migrate from | +| clusterChecksRunner.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | +| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | +| clusterChecksRunner.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | +| clusterChecksRunner.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| clusterChecksRunner.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. 
If not specified, the pod priority will be default or zero if there is no default. | +| clusterChecksRunner.rbac.create | Used to configure RBAC resources creation | +| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field Create is true | +| clusterChecksRunner.replicas | Number of the Cluster Checks Runner replicas | +| clusterChecksRunner.tolerations | If specified, the Cluster-Checks pod's tolerations. | +| clusterName | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | +| credentials.apiKey | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | +| credentials.apiKeyExistingSecret | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | +| credentials.apiSecret.keyName | KeyName is the key of the secret to use | +| credentials.apiSecret.secretName | SecretName is the name of the secret | +| credentials.appKey | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | +| credentials.appKeyExistingSecret | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | +| credentials.appSecret.keyName | KeyName is the key of the secret to use | +| credentials.appSecret.secretName | SecretName is the name of the secret | +| credentials.token | This needs to be at least 32 characters a-zA-Z. It is a preshared key between the node agents and the cluster agent | +| credentials.useSecretBackend | UseSecretBackend use the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster, Cluster-Checks.
If useSecretBackend: true, other credential parameters will be ignored. default value is false. | +| site | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | {{< /table >}} \ No newline at end of file From fa5f6889c603efb4da9ed3dcd40ab666e6bdd9b4 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 14:08:30 -0700 Subject: [PATCH 21/26] more caps --- content/en/agent/kubernetes/_index.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md index 25aeed4d8a7cc..770366ceace1a 100644 --- a/content/en/agent/kubernetes/_index.md +++ b/content/en/agent/kubernetes/_index.md @@ -180,14 +180,14 @@ To install the Datadog Agent on your Kubernetes cluster: Using the Datadog Operator requires the following prerequisites: -- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, because of limited CRD support, the operator may not work as expected. +- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, because of limited CRD support, the Operator may not work as expected. - [`Helm`][2] for deploying the `datadog-operator`. - [`Kubectl` CLI][3] for installing the `datadog-agent`. -## Deploy an Agent with the operator +## Deploy an Agent with the Operator -To deploy a Datadog Agent with the operator in the minimum number of steps, use the [`datadog-agent-with-operator`][4] Helm chart. +To deploy a Datadog Agent with the Operator in the minimum number of steps, use the [`datadog-agent-with-operator`][4] Helm chart. Here are the steps: 1. 
[Download the chart][5]: From 3a5ead7e864e88a8ea73b594456c7cd8a1891aed Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 14:09:01 -0700 Subject: [PATCH 22/26] missed some --- content/en/agent/guide/operator-advanced.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/agent/guide/operator-advanced.md b/content/en/agent/guide/operator-advanced.md index 63b29d242a3e2..4033ec4a5e54b 100644 --- a/content/en/agent/guide/operator-advanced.md +++ b/content/en/agent/guide/operator-advanced.md @@ -13,7 +13,7 @@ further_reading: Using the Datadog Operator requires the following prerequisites: -- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, because of limited CRD support, the operator may not work as expected. +- **Kubernetes Cluster version >= v1.14.X**: Tests were done on versions >= `1.14.0`. Still, it should work on versions `>= v1.11.0`. For earlier versions, because of limited CRD support, the Operator may not work as expected. - [`Helm`][2] for deploying the `datadog-operator`. - [`Kubectl` CLI][3] for installing the `datadog-agent`. @@ -31,7 +31,7 @@ To use the Datadog Operator, deploy it in your Kubernetes cluster. Then create a helm install datadog/datadog-operator ``` -## Deploy the Datadog Agents with the operator +## Deploy the Datadog Agents with the Operator After deploying the Datadog Operator, create the `DatadogAgent` resource that triggers the Datadog Agent's deployment in your Kubernetes cluster. By creating this resource in the `Datadog-Operator` namespace, the Agent is deployed as a `DaemonSet` on every `Node` of your cluster. 
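
The `DatadogAgent` resource described above can be sketched as a minimal manifest. This is only an illustration under assumptions: the `datadoghq.com/v1alpha1` API version, the `datadog-agent.yaml` filename, and the `<DATADOG_API_KEY>` placeholder are not confirmed by this patch and should be checked against the Operator's CRD documentation.

```yaml
# datadog-agent.yaml — minimal sketch of a DatadogAgent custom resource.
# apiVersion/kind are assumed from the datadog-operator CRD; verify against
# the Operator release you deploy. <DATADOG_API_KEY> is a placeholder.
apiVersion: datadoghq.com/v1alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  credentials:
    apiKey: <DATADOG_API_KEY>   # your Datadog API key
  agent:
    image:
      name: "datadog/agent:latest"   # node Agent image to run as a DaemonSet
```

Applying a manifest like this with `kubectl apply -f datadog-agent.yaml` in the namespace where the Operator runs should trigger the `DaemonSet` rollout described above.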
From 395f4417d931906d99e4c3d3f6dc0e345bbf8e01 Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Wed, 30 Sep 2020 15:13:59 -0700 Subject: [PATCH 23/26] table cleanup --- content/en/agent/kubernetes/apm.md | 12 +- .../kubernetes/operator_configuration.md | 288 +++++++++--------- 2 files changed, 160 insertions(+), 140 deletions(-) diff --git a/content/en/agent/kubernetes/apm.md b/content/en/agent/kubernetes/apm.md index ac52137e9efdd..4a97bd0c0dc87 100644 --- a/content/en/agent/kubernetes/apm.md +++ b/content/en/agent/kubernetes/apm.md @@ -159,17 +159,18 @@ List of all environment variables available for tracing within the Agent running ### Operator environment variables | Environment variable | Description | | -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `agent.apm.enabled` | Enable this to enable APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | -| `agent.apm.env` | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| `agent.apm.enabled` | Enable this to enable APM and tracing, on port 8126. See the [Datadog Docker documentation][8]. | +| `agent.apm.env` | The Datadog Agent supports many environment variables. See the [Docker environment variables documentation][9]. | | `agent.apm.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | | +| `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. For more info, see the [Kubernetes documentation][10]. | +| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. For more info, see the [Kubernetes documentation][10]. | ## Further Reading {{< partial name="whats-next/whats-next.html" >}} + [1]: /agent/kubernetes/ [2]: /agent/cluster_agent/admission_controller/ [3]: /tracing/setup/ @@ -177,3 +178,6 @@ List of all environment variables available for tracing within the Agent running [5]: /tracing/guide/security/#replace-rules [6]: /tracing/app_analytics/#automatic-configuration [7]: /tracing/guide/setting_primary_tags_to_scope/#environment +[8]: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host +[9]: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables +[10]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 510a436b969a5..dcaa4694328a7 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -28,149 +28,149 @@ spec: | Parameter | Description |
|--------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | agent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | -| agent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| agent.apm.enabled | Enable this to enable APM and tracing, on port 8126 ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host | -| agent.apm.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.apm.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | -| agent.apm.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.apm.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.config.checksd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | -| agent.config.collectEvents | nables this to start event collection from the kubernetes API ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/ | -| agent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | -| agent.config.criSocket.criSocketPath | Path to the container runtime socket (if different from Docker) This is supported starting from agent 6.6.0 | -| agent.config.criSocket.dockerSocketPath | Path to the docker runtime socket | -| agent.config.ddUrl | The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in "site". | -| agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging | -| agent.config.dogstatsd.useDogStatsDSocketVolume | Enable dogstatsd over Unix Domain Socket ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ | -| agent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner pods. | +| agent.apm.enabled | Enable this to enable APM and tracing on port 8126. See the [Datadog Docker documentation][1]. | +| agent.apm.env | The Datadog Agent supports many [environment variables][2]. | +| agent.apm.hostPort | Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. 
| +| agent.apm.resources.limits | Limits describes the maximum amount of compute resources allowed. For more info, [see the Kubernetes documentation][3]. | +| agent.apm.resources.requests | Requests describes the minimum amount of compute resources required. If requests is omitted for a container, it defaults to `limits` if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. For more info, [see the Kubernetes documentation][3]. | +| agent.config.checksd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory. | +| agent.config.collectEvents | Enables starting event collection from the Kubernetes API. [See the Event Collection documentation][4]. | +| agent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory. | +| agent.config.criSocket.criSocketPath | Path to the container runtime socket (if different from Docker). This is supported starting from Agent 6.6.0. | +| agent.config.criSocket.dockerSocketPath | Path to the Docker runtime socket. | +| agent.config.ddUrl | The host of the Datadog intake server to send Agent data to. Only set this option if you need the Agent to send data to a custom URL. Overrides the site setting defined in `site`. | +| agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging. See the [Unix Socket origin detection documentation][5]. | +| agent.config.dogstatsd.useDogStatsDSocketVolume | Enable DogStatsD over a Unix Domain Socket. [See the Unix Socket documentation][6]. | +| agent.config.env | The Datadog Agent supports many environment variables. [See the Docker environment variables documentation][2]. | | agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | | agent.config.leaderElection | Enables leader election mechanism for event collection. 
| -| agent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags. : | -| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. : | -| agent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.config.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| agent.config.securityContext.capabilities.add | Added capabilities | -| agent.config.securityContext.capabilities.drop | Removed capabilities | +| agent.config.logLevel | Set logging verbosity. Valid log levels are: trace, debug, info, warn, error, critical, and off. | +| agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags. `: ` | +| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. `: ` | +| agent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. [See the Kubernetes documentation][3]. | +| agent.config.resources.requests | Requests describes the minimum amount of compute resources required. 
If requests is omitted for a container, it defaults to limits if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. [See the Kubernetes documentation][3]. | +| agent.config.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is run as Privileged or has CAP_SYS_ADMIN. | +| agent.config.securityContext.capabilities.add | Added capabilities. | +| agent.config.securityContext.capabilities.drop | Removed capabilities. | | agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| agent.config.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | -| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext.
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for read-only paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root file system. Default is false. | +| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| | agent.config.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | | agent.config.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | | agent.config.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | | agent.config.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | -| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.config.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| agent.config.tags | List of tags to attach to every metric, event and service check collected by this Agent. 
Learn more about tagging: https://docs.datadoghq.com/tagging/ | +| agent.config.tags | List of tags to attach to every metric, event, and service check collected by this Agent. See the [Tagging documentation][8]. | | agent.config.tolerations | If specified, the Agent pod's tolerations. | -| agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container | -| agent.config.volumes | Specify additional volumes in the Datadog Agent container | -| agent.customConfig.configData | ConfigData corresponds to the configuration file content | -| agent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| agent.customConfig.configMap.name | Name the ConfigMap name | -| agent.daemonsetName | Name of the Daemonset to create or migrate from | +| agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container. | +| agent.config.volumes | Specify additional volumes in the Datadog Agent container. | +| agent.customConfig.configData | ConfigData corresponds to the configuration file content. | +| agent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content. | +| agent.customConfig.configMap.name | Name of the ConfigMap. | +| agent.daemonsetName | Name of the DaemonSet to create or migrate from. | | agent.deploymentStrategy.canary.duration | | | agent.deploymentStrategy.canary.paused | | | agent.deploymentStrategy.canary.replicas | | -| agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendDaemonSet | +| agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendedDaemonSet. | | agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maximum number of pods created in parallel. Default value is 250.
| -| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | MaxPodSchedulerFailure the maxinum number of not scheduled on its Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute | +| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | MaxPodSchedulerFailure is the maximum number of pods that fail to be scheduled on their Node due to a scheduler failure, such as resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. | | agent.deploymentStrategy.rollingUpdate.maxUnavailable | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | -| agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | SlowStartAdditiveIncrease Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | -| agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | SlowStartIntervalDuration the duration between to 2 Default value is 1min. | -| agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet | -| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. | -| agent.dnsConfig.options | A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy.
| -| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. | -| agent.dnsPolicy | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | -| agent.env | Environment variables for all Datadog Agents Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | +| agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | +| agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | The interval between two slow-start increases. Default value is 1min. | +| agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet. | +| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. These are appended to the base nameservers generated from DNSPolicy. Duplicated nameservers are removed. | +| agent.dnsConfig.options | A list of DNS resolver options. These are merged with the base options generated from DNSPolicy. Duplicated entries are removed. Resolution options given in Options override those that appear in the base DNSPolicy. | +| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. These are appended to the base search paths generated from DNSPolicy. Duplicated search paths are removed. | +| agent.dnsPolicy | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default', or 'None'.
DNS parameters given in DNSConfig are merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | +| agent.env | Environment variables for all Datadog Agents. [See the Docker environment variables documentation][2]. | | agent.hostNetwork | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | | agent.hostPID | Use the host's pid namespace. Optional: Default to false. | -| agent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| agent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| agent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| agent.log.containerCollectUsingFiles | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default: true | -| agent.log.containerLogsPath | This to allow log collection from container log path. Set to a different path if not using docker runtime. ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest Default to /var/lib/docker/containers | -| agent.log.enabled | Enables this to activate Datadog Agent log collection. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. 
ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup | -| agent.log.openFilesLimit | Set the maximum number of logs files that the Datadog Agent will tail up to. Increasing this limit can increase resource consumption of the Agent. ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup Default to 100 | -| agent.log.podLogsPath | This to allow log collection from pod log path. Default to /var/log/pods | -| agent.log.tempStoragePath | This path (always mounted from the host) is used by Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows to start tailing the log files from the right offset Default to /var/lib/datadog-agent/logs | +| agent.image.name | Define the image to use. Use "datadog/agent:latest" for Datadog Agent 6. Use "datadog/dogstatsd:latest" for standalone Datadog Agent DogStatsD. Use "datadog/cluster-agent:latest" for Datadog Cluster Agent. | +| agent.image.pullPolicy | The Kubernetes pull policy. Use Always, Never, or IfNotPresent. | +| agent.image.pullSecrets | It is possible to specify Docker registry credentials. [See the Kubernetes documentation][9]. | +| agent.log.containerCollectUsingFiles | Collect logs from files in /var/log/pods instead of using the container runtime API. It's usually the most efficient way of collecting logs. See the [Log Collection][10] documentation. Default: true. | +| agent.log.containerLogsPath | Set this to allow log collection from the container log path. Set to a different path if not using the Docker runtime. See the [Kubernetes documentation][11]. Defaults to /var/lib/docker/containers. | +| agent.log.enabled | Enable this to activate Datadog Agent log collection. See the [Log Collection][10] documentation. | +| agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. See the [Log Collection][10] documentation.
| +| agent.log.openFilesLimit | Set the maximum number of log files that the Datadog Agent tails. Increasing this limit can increase resource consumption of the Agent. See the [Log Collection][10] documentation. Defaults to 100. | +| agent.log.podLogsPath | Set this to allow log collection from the pod log path. Defaults to /var/log/pods. | +| agent.log.tempStoragePath | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows you to start tailing the log files from the right offset. Defaults to /var/lib/datadog-agent/logs. | | agent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| agent.process.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| agent.process.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.process.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.process.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value.
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.rbac.create | Used to configure RBAC resources creation | -| agent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | -| agent.systemProbe.appArmorProfileName | AppArmorProfileName specify a apparmor profile | -| agent.systemProbe.bpfDebugEnabled | BPFDebugEnabled logging for kernel debug | -| agent.systemProbe.conntrackEnabled | ConntrackEnabled enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data Ref: http://conntrack-tools.netfilter.org/ | -| agent.systemProbe.debugPort | DebugPort Specify the port to expose pprof and expvar for system-probe agent | -| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset | -| agent.systemProbe.env | The Datadog SystemProbe supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| agent.systemProbe.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.systemProbe.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| agent.systemProbe.secCompCustomProfileConfigMap | SecCompCustomProfileConfigMap specify a pre-existing ConfigMap containing a custom SecComp profile | -| agent.systemProbe.secCompProfileName | SecCompProfileName specify a seccomp profile | -| agent.systemProbe.secCompRootPath | SecCompRootPath specify the seccomp profile root directory | -| agent.systemProbe.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN | -| agent.systemProbe.securityContext.capabilities.add | Added capabilities | -| agent.systemProbe.securityContext.capabilities.drop | Removed capabilities | +| agent.process.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. [See the Process documentation][12]. | +| agent.process.env | The Datadog Agent supports many [environment variables][2]. | +| agent.process.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| agent.process.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| agent.rbac.create | Used to configure RBAC resources creation. | +| agent.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field Create is true. | +| agent.systemProbe.appArmorProfileName | Specify an AppArmor profile. | +| agent.systemProbe.bpfDebugEnabled | Enable logging for kernel debugging.
| +| agent.systemProbe.conntrackEnabled | Enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data. [See the Conntrack documentation][13]. | +| agent.systemProbe.debugPort | Specify the port to expose pprof and expvar for the system-probe agent. | +| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. [See the Process documentation][12]. | +| agent.systemProbe.env | The Datadog SystemProbe supports many [environment variables][2]. | +| agent.systemProbe.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| agent.systemProbe.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| agent.systemProbe.secCompCustomProfileConfigMap | Specify a pre-existing ConfigMap containing a custom SecComp profile. | +| agent.systemProbe.secCompProfileName | Specify a seccomp profile. | +| agent.systemProbe.secCompRootPath | Specify the seccomp profile root directory. | +| agent.systemProbe.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is always true when the container is run as Privileged or has CAP_SYS_ADMIN. | +| agent.systemProbe.securityContext.capabilities.add | Added capabilities. | +| agent.systemProbe.securityContext.capabilities.drop | Removed capabilities. | | agent.systemProbe.securityContext.privileged | Run container in privileged mode.
Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| agent.systemProbe.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. | +| agent.systemProbe.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for read-only paths and masked paths. This requires the ProcMountType feature flag to be enabled. | | agent.systemProbe.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | -| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
| +| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | +| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | | agent.systemProbe.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | | agent.systemProbe.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | | agent.systemProbe.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | | agent.systemProbe.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | -| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
| +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.systemProbe.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| agent.useExtendedDaemonset | UseExtendedDaemonset use ExtendedDaemonset for Agent deployment. default value is false. | -| clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster-agent Pods. | -| clusterAgent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| agent.useExtendedDaemonset | UseExtendedDaemonset enables the ExtendedDaemonSet for the Agent deployment. Default value is false. | +| clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that are added to the Cluster Agent Pods. | +| clusterAgent.additionalLabels | AdditionalLabels provide labels that are added to the cluster checks runner Pods. | +| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights. That is, for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | | clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | | clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterAgent.config.admissionController.enabled | Enable the admission controller to be able to inject APM/Dogstatsd config and standard tags (env, service, version) automatically into your pods | -| clusterAgent.config.admissionController.mutateUnlabelled | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | -| clusterAgent.config.admissionController.serviceName | ServiceName corresponds to the webhook service name | -| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset ref: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ https://docs.datadoghq.com/agent/cluster_agent/endpointschecks/ Autodiscovery via Kube Service annotations is automatically enabled | -| clusterAgent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory | -| clusterAgent.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| clusterAgent.config.externalMetrics.enabled | Enable the metricsProvider to be able to scale based on metrics in Datadog | -| clusterAgent.config.externalMetrics.port | If specified configures the metricsProvider external metrics service port | -| clusterAgent.config.externalMetrics.useDatadogMetrics | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries) | -| clusterAgent.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| clusterAgent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterAgent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterAgent.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Agent container | -| clusterAgent.config.volumes | Specify additional volumes in the Datadog Cluster Agent container | -| clusterAgent.customConfig.configData | ConfigData corresponds to the configuration file content | -| clusterAgent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| clusterAgent.customConfig.configMap.name | Name the ConfigMap name | -| clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from | +| clusterAgent.config.admissionController.enabled | Enable the admission controller to be able to inject APM/DogStatsD config and standard tags (env, service, version) automatically into your pods. | +| clusterAgent.config.admissionController.mutateUnlabelled | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | +| clusterAgent.config.admissionController.serviceName | ServiceName corresponds to the webhook service name. | +| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset. See the [Cluster Checks][14] documentation. Autodiscovery through Kube Service annotations is automatically enabled. | +| clusterAgent.config.confd.configMapName | Name of a ConfigMap used to mount a directory. 
| +| clusterAgent.config.env | The Datadog Agent supports many [environment variables][2]. | +| clusterAgent.config.externalMetrics.enabled | Enable the metricsProvider to be able to scale based on metrics in Datadog. | +| clusterAgent.config.externalMetrics.port | If specified, configures the metricsProvider external metrics service port. | +| clusterAgent.config.externalMetrics.useDatadogMetrics | Enable usage of the DatadogMetrics CRD (allows scaling on arbitrary queries). | +| clusterAgent.config.logLevel | Set logging verbosity. Valid log levels are: trace, debug, info, warn, error, critical, and off. | +| clusterAgent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| clusterAgent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| clusterAgent.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Agent container. | +| clusterAgent.config.volumes | Specify additional volumes in the Datadog Cluster Agent container. | +| clusterAgent.customConfig.configData | ConfigData corresponds to the configuration file content. | +| clusterAgent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content. | +| clusterAgent.customConfig.configMap.name | Name of the ConfigMap. | +| clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from. 
| | clusterAgent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | | clusterAgent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| clusterAgent.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| clusterAgent.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| clusterAgent.image.pullSecrets | It is possible to specify docker registry credentials. See the [Kubernetes documentation][9]. | +| clusterAgent.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. | | clusterAgent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| clusterAgent.rbac.create | Used to configure RBAC resources creation | -| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | -| clusterAgent.replicas | Number of the Cluster Agent replicas | -| clusterAgent.tolerations | If specified, the Cluster-Agent pod's tolerations. | +| clusterAgent.rbac.create | Used to configure RBAC resources creation. 
| +| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field Create is true. | +| clusterAgent.replicas | Number of the Cluster Agent replicas. | +| clusterAgent.tolerations | If specified, the Cluster Agent pod's tolerations. | | clusterChecksRunner.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | | clusterChecksRunner.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | | clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | @@ -179,36 +179,52 @@ spec: | clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
| | clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | | clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterChecksRunner.config.env | The Datadog Agent supports many environment variables Ref: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables | -| clusterChecksRunner.config.logLevel | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off | -| clusterChecksRunner.config.resources.limits | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterChecksRunner.config.resources.requests | Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ | -| clusterChecksRunner.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Check Runner container | -| clusterChecksRunner.config.volumes | Specify additional volumes in the Datadog Cluster Check Runner container | -| clusterChecksRunner.customConfig.configData | ConfigData corresponds to the configuration file content | -| clusterChecksRunner.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content | -| clusterChecksRunner.customConfig.configMap.name | Name the ConfigMap name | -| clusterChecksRunner.deploymentName | Name of the cluster checks deployment to create or migrate from | -| clusterChecksRunner.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| clusterChecksRunner.image.pullSecrets | It is possible to specify docker registry credentials See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | -| clusterChecksRunner.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | +| clusterChecksRunner.config.env | The Datadog Agent supports many [environment variables][2]. | +| clusterChecksRunner.config.logLevel | Set logging verbosity. Valid log levels are: trace, debug, info, warn, error, critical, and off. 
| +| clusterChecksRunner.config.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| clusterChecksRunner.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| clusterChecksRunner.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Check Runner container. | +| clusterChecksRunner.config.volumes | Specify additional volumes in the Datadog Cluster Check Runner container. | +| clusterChecksRunner.customConfig.configData | ConfigData corresponds to the configuration file content. | +| clusterChecksRunner.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content. | +| clusterChecksRunner.customConfig.configMap.name | Name of the ConfigMap. | +| clusterChecksRunner.deploymentName | Name of the cluster checks deployment to create or migrate from. | +| clusterChecksRunner.image.name | Define the image to use. Use "datadog/agent:latest" for Datadog Agent 6. Use "datadog/dogstatsd:latest" for standalone Datadog Agent DogStatsD. Use "datadog/cluster-agent:latest" for Datadog Cluster Agent. | +| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy. Use Always, Never, or IfNotPresent. | +| clusterChecksRunner.image.pullSecrets | It is possible to specify docker registry credentials. See the [Kubernetes documentation][9]. | +| clusterChecksRunner.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. | | clusterChecksRunner.priorityClassName | If specified, indicates the pod's priority. 
"system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| clusterChecksRunner.rbac.create | Used to configure RBAC resources creation | -| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true | -| clusterChecksRunner.replicas | Number of the Cluster Agent replicas | +| clusterChecksRunner.rbac.create | Used to configure RBAC resources creation. | +| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field Create is true. | +| clusterChecksRunner.replicas | Number of the cluster checks runner replicas. | | clusterChecksRunner.tolerations | If specified, the Cluster-Checks pod's tolerations. | | clusterName | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | -| credentials.apiKey | APIKey Set this to your Datadog API key before the Agent runs. ref: https://app.datadoghq.com/account/settings#agent/kubernetes | -| credentials.apiKeyExistingSecret | APIKeyExistingSecret is DEPRECATED. In order to pass the API key through an existing secret, please consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". | -| credentials.apiSecret.keyName | KeyName is the key of the secret to use | -| credentials.apiSecret.secretName | SecretName is the name of the secret | +| credentials.apiKey | APIKey: Set this to your Datadog API key before the Agent runs. | +| credentials.apiKeyExistingSecret | APIKeyExistingSecret is DEPRECATED. To pass the API key through an existing secret, consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". 
| +| credentials.apiSecret.keyName | KeyName is the key of the secret to use. | +| credentials.apiSecret.secretName | SecretName is the name of the secret. | | credentials.appKey | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | -| credentials.appKeyExistingSecret | AppKeyExistingSecret is DEPRECATED. In order to pass the APP key through an existing secret, please consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | -| credentials.appSecret.keyName | KeyName is the key of the secret to use | -| credentials.appSecret.secretName | SecretName is the name of the secret | -| credentials.token | This needs to be at least 32 characters a-zA-z It is a preshared key between the node agents and the cluster agent | -| credentials.useSecretBackend | UseSecretBackend use the Agent secret backend feature for retreiving all credentials needed by the different components: Agent, Cluster, Cluster-Checks. If useSecretBackend: true, other credential parameters will be ignored. default value is false. | +| credentials.appKeyExistingSecret | AppKeyExistingSecret is DEPRECATED. To pass the app key through an existing secret, consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | +| credentials.appSecret.keyName | KeyName is the key of the secret to use. | +| credentials.appSecret.secretName | SecretName is the name of the secret. | +| credentials.token | This needs to be at least 32 characters (a-zA-Z). It is a preshared key between the Node Agents and the Cluster Agent. | +| credentials.useSecretBackend | Use the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster, Cluster Checks. If useSecretBackend: true, other credential parameters will be ignored. Default value is false. | | site | The site of the Datadog intake to send Agent data to. 
Set to 'datadoghq.eu' to send data to the EU site. | -{{< /table >}} \ No newline at end of file +{{< /table >}} + +[1]: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host +[2]: https://docs.datadoghq.com/agent/docker/?tab=standard#environment-variables +[3]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ +[4]: https://docs.datadoghq.com/agent/kubernetes/event_collection/ +[5]: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging +[6]: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/ +[7]: https://github.com/kubernetes-sigs/windows-gmsa +[8]: https://docs.datadoghq.com/tagging/ +[9]: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod +[10]: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup +[11]: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest +[12]: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset +[13]: http://conntrack-tools.netfilter.org/ +[14]: https://docs.datadoghq.com/agent/cluster_agent/clusterchecks/ +[15]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ From e9e29c6bad9593a3ae8f9e761f4016d7c6e59cff Mon Sep 17 00:00:00 2001 From: Cecilia Watt Date: Thu, 1 Oct 2020 14:21:15 -0700 Subject: [PATCH 24/26] more editing --- content/en/agent/kubernetes/apm.md | 6 +- .../kubernetes/operator_configuration.md | 228 +++++++++--------- 2 files changed, 117 insertions(+), 117 deletions(-) diff --git a/content/en/agent/kubernetes/apm.md b/content/en/agent/kubernetes/apm.md index 4a97bd0c0dc87..438608db40c5b 100644 --- a/content/en/agent/kubernetes/apm.md +++ b/content/en/agent/kubernetes/apm.md @@ -160,10 +160,10 @@ List of all environment variables available for tracing within the Agent running | Environment variable | Description | | -------------------------- | 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `agent.apm.enabled` | Enable this to enable APM and tracing, on port 8126. See the [Datadog Docker documentation][8]. | -| `agent.apm.env` | The Datadog Agent supports many environment variables. See the [Docker environment variables documentation][9]. | -| `agent.apm.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | +| `agent.apm.env` | The Datadog Agent supports many [environment variables][9]. | +| `agent.apm.hostPort` | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If `HostNetwork` is specified, this must match `ContainerPort`. Most containers do not need this. | | `agent.apm.resources.limits` | Limits describes the maximum amount of compute resources allowed. For more info, see the [Kubernetes documentation][10]. | -| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. For more info, see the [Kubernetes documentation][10]. | | +| `agent.apm.resources.requests` | Requests describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified, otherwise to an implementation-defined value. For more info, see the [Kubernetes documentation][10]. 
| | ## Further Reading diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index dcaa4694328a7..319a460fb5cef 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -27,51 +27,51 @@ spec: {{< table table-type="break-word" >}} | Parameter | Description | |--------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| agent.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the Agent Pods. | -| agent.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner pods. | +| agent.additionalAnnotations | `AdditionalAnnotations` provide annotations that will be added to the Agent pods. | +| agent.additionalLabels | `AdditionalLabels` provide labels that will be added to the Agent pods. | | agent.apm.enabled | Enable this to enable APM and tracing on port 8126. See the [Datadog Docker documentation][1]. | | agent.apm.env | The Datadog Agent supports many [environment variables][2]. | -| agent.apm.hostPort | Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. 
Most containers do not need this. | +| agent.apm.hostPort | Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If `HostNetwork` is specified, this must match `ContainerPort`. Most containers do not need this. | | agent.apm.resources.limits | Limits describes the maximum amount of compute resources allowed. For more info, [see the Kubernetes documentation][3]. | -| agent.apm.resources.requests | Requests describes the minimum amount of compute resources required. If requests is omitted for a container, it defaults to `limits` if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. For more info, [see the Kubernetes documentation][3]. | -| agent.config.checksd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory. | +| agent.apm.resources.requests | `Requests` describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. For more info, [see the Kubernetes documentation][3]. | +| agent.config.checksd.configMapName | Name of a ConfigMap used to mount a directory. | | agent.config.collectEvents | Enables starting event collection from the Kubernetes API. [See the Event Collection documentation][4]. | -| agent.config.confd.configMapName | ConfigMapName name of a ConfigMap used to mount a directory. | +| agent.config.confd.configMapName | Name of a ConfigMap used to mount a directory. | | agent.config.criSocket.criSocketPath | Path to the container runtime socket (if different from Docker). This is supported starting from Agent 6.6.0. | | agent.config.criSocket.dockerSocketPath | Path to the Docker runtime socket. | | agent.config.ddUrl | The host of the Datadog intake server to send Agent data to. Only set this option if you need the Agent to send data to a custom URL. 
Overrides the site setting defined in `site`. | | agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging. See the [Unix Socket origin detection documentation][5]. | | agent.config.dogstatsd.useDogStatsDSocketVolume | Enable DogStatsD over a Unix Domain Socket. [See the Unix Socket documentation][6]. | -| agent.config.env | The Datadog Agent supports many environment variables. [See the Docker environment variables documentation][2]. | -| agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. | +| agent.config.env | The Datadog Agent supports many [environment variables][2]. | +| agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If `HostNetwork` is specified, this must match `ContainerPort`. Most containers do not need this. | | agent.config.leaderElection | Enables leader election mechanism for event collection. | -| agent.config.logLevel | Set logging verbosity. Valid log levels are: trace, debug, info, warn, error, critical, and off. | +| agent.config.logLevel | Set logging verbosity. Valid log levels are: `trace`, `debug`, `info`, `warn`, `error`, `critical`, and `off`. | | agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags. `: ` | | agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. `: ` | -| agent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. [See the Kubernetes documentation][3]. | -| agent.config.resources.requests | Requests describes the minimum amount of compute resources required. If requests is omitted for a container, it defaults to limits if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. 
[See the Kubernetes documentation][3]. | -| agent.config.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is both run as Privileged, and has CAP_SYS_ADMIN. | +| agent.config.resources.limits | Describes the maximum amount of compute resources allowed. [See the Kubernetes documentation][3]. | +| agent.config.resources.requests | Describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. [See the Kubernetes documentation][3]. | +| agent.config.securityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the `no_new_privs` flag will be set on the container process. `AllowPrivilegeEscalation` is true always when the container is both run as `Privileged`, and has `CAP_SYS_ADMIN`. | | agent.config.securityContext.capabilities.add | Added capabilities. | | agent.config.securityContext.capabilities.drop | Removed capabilities. | -| agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| agent.config.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for read-only paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root file system. Default is false. 
| -| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.config.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. | -| agent.config.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | -| agent.config.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | -| agent.config.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | -| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. 
This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.config.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | +| agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to `false`. | +| agent.config.securityContext.procMount | `procMount` denotes the type of proc mount to use for the containers. The default is `DefaultProcMount` which uses the container runtime defaults for read-only paths and masked paths. This requires the `ProcMountType` feature flag to be enabled. | +| agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root file system. Default is `false`. | +| agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. | +| agent.config.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet validates the image at runtime to ensure that it does not run as UID 0 (root) and fails to start the container if it does. If unset or false, no such validation will be performed. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. 
| +| agent.config.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. | +| agent.config.securityContext.seLinuxOptions.level | SELinux level label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.role | SELinux role label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.type | SELinux type label that applies to the container. | +| agent.config.securityContext.seLinuxOptions.user | SELinux user label that applies to the container. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpec | `GMSACredentialSpec` is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the `GMSACredentialSpecName` field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | `GMSACredentialSpecName` is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.config.securityContext.windowsOptions.runAsUserName | The `UserName` in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. This field is beta-level and may be disabled with the `WindowsRunAsUserName` feature flag. | | agent.config.tags | List of tags to attach to every metric, event, and service check collected by this Agent. See the [Tagging documentation][8] | | agent.config.tolerations | If specified, the Agent pod's tolerations. 
| | agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container. | | agent.config.volumes | Specify additional volumes in the Datadog Agent container. | -| agent.customConfig.configData | ConfigData corresponds to the configuration file content. | -| agent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content. | +| agent.customConfig.configData | Corresponds to the configuration file content. | +| agent.customConfig.configMap.fileKey | Corresponds to the key used in the `ConfigMap.Data` to store the configuration file content. | | agent.customConfig.configMap.name | Name of the ConfigMap. | | agent.daemonsetName | Name of the DaemonSet to create or migrate from. | | agent.deploymentStrategy.canary.duration | | @@ -79,137 +79,137 @@ spec: | agent.deploymentStrategy.canary.replicas | | | agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendDaemonSet. | | agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maximum number of pods created in parallel. Default value is 250. | -| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | MaxPodSchedulerFailure is the maxinum number of pods scheduled on its Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute. | +| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | `maxPodSchedulerFailure` is the maximum number of pods scheduled on its Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute. | | agent.deploymentStrategy.rollingUpdate.maxUnavailable | The maximum number of DaemonSet pods that can be unavailable during the update. 
Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | | agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | | agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | The duration interval. Default value is 1min. | | agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet. | -| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. This are appended to the base nameservers generated from DNSPolicy. Duplicated nameservers are removed. | -| agent.dnsConfig.options | A list of DNS resolver options. This are merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options override those that appear in the base DNSPolicy. | -| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. This are appended to the base search paths generated from DNSPolicy. Duplicated search paths are removed. | -| agent.dnsPolicy | Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default', or 'None'. DNS parameters given in DNSConfig are merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. | +| agent.dnsConfig.nameservers | A list of DNS name server IP addresses. These are appended to the base nameservers generated from `dnsPolicy`. Duplicated nameservers are removed. | +| agent.dnsConfig.options | A list of DNS resolver options. These are merged with the base options generated from `dnsPolicy`. Duplicated entries will be removed. 
Resolution options given in `options` override those that appear in the base `dnsPolicy`. | +| agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. These are appended to the base search paths generated from `dnsPolicy`. Duplicated search paths are removed. | +| agent.dnsPolicy | Set DNS policy for the pod. Defaults to `ClusterFirst`. Valid values are `ClusterFirstWithHostNet`, `ClusterFirst`, `Default`, or `None`. DNS parameters given in `dnsConfig` are merged with the policy selected with `dnsPolicy`. To have DNS options set along with `hostNetwork`, you have to specify `dnsPolicy` explicitly to `ClusterFirstWithHostNet`. | | agent.env | Environment variables for all Datadog Agents. [See the Docker environment variables documentation][2]. | -| agent.hostNetwork | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. | -| agent.hostPID | Use the host's pid namespace. Optional: Default to false. | -| agent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6. Use "datadog/dogstatsd:latest" for standalone Datadog Agent DogStatsD. Use "datadog/cluster-agent:latest" for Datadog Cluster Agent. | -| agent.image.pullPolicy | The Kubernetes pull policy. Use Always, Never, or IfNotPresent. | +| agent.hostNetwork | Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Defaults to `false`. | +| agent.hostPID | Use the host's PID namespace. Optional: Defaults to `false`. | +| agent.image.name | Define the image to use. Use `datadog/agent:latest` for Datadog Agent 6. Use `datadog/dogstatsd:latest` for stand-alone Datadog Agent DogStatsD. Use `datadog/cluster-agent:latest` for Datadog Cluster Agent. | +| agent.image.pullPolicy | The Kubernetes pull policy. Use `Always`, `Never`, or `IfNotPresent`. 
| | agent.image.pullSecrets | It is possible to specify Docker registry credentials. [See the Kubernetes documentation][9]. | -| agent.log.containerCollectUsingFiles | Collect logs from files in /var/log/pods instead of using container runtime API. It's usually the most efficient way of collecting logs. See the [Log Collection][10] documentation. Default: true. | -| agent.log.containerLogsPath | This to allow log collection from container log path. Set to a different path if not using docker runtime. See the [Kubernetes documentation][11]. Defaults to /var/lib/docker/containers. | +| agent.log.containerCollectUsingFiles | Collect logs from files in `/var/log/pods` instead of using container runtime API. This is usually the most efficient way of collecting logs. See the [Log Collection][10] documentation. Default: `true`. | +| agent.log.containerLogsPath | Allow log collection from the container log path. Set to a different path if not using the Docker runtime. See the [Kubernetes documentation][11]. Defaults to `/var/lib/docker/containers`. | | agent.log.enabled | Enable this to activate Datadog Agent log collection. See the [Log Collection][10] documentation. | | agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. See the [Log Collection][10] documentation. | -| agent.log.openFilesLimit | Set the maximum number of logs files that the Datadog Agent will tail up to. Increasing this limit can increase resource consumption of the Agent. See the [Log Collection][10] documentation. Defaults to 100. | -| agent.log.podLogsPath | Set this to allow log collection from pod log path. Defaults to /var/log/pods. | -| agent.log.tempStoragePath | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows you to start tailing the log files from the right offset. Defaults to /var/lib/datadog-agent/logs. 
| -| agent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | -| agent.process.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. [See the Process documentation][12]. | +| agent.log.openFilesLimit | Set the maximum number of log files that the Datadog Agent tails. Increasing this limit can increase resource consumption of the Agent. See the [Log Collection][10] documentation. Defaults to 100. | +| agent.log.podLogsPath | Set this to allow log collection from the pod log path. Defaults to `/var/log/pods`. | +| agent.log.tempStoragePath | This path (always mounted from the host) is used by the Datadog Agent to store information about processed log files. If the Datadog Agent is restarted, it allows you to start tailing the log files from the right offset. Defaults to `/var/lib/datadog-agent/logs`. | +| agent.priorityClassName | If specified, indicates the pod's priority. `system-node-critical` and `system-cluster-critical` are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a `PriorityClass` object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| agent.process.enabled | Enable this to activate live process monitoring. Note: `/etc/passwd` is automatically mounted to allow username resolution. [See the Process documentation][12]. | | agent.process.env | The Datadog Agent supports many [environment variables][3]. 
| -| agent.process.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | -| agent.process.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| agent.process.resources.limits | Describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| agent.process.resources.requests | Describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | | agent.rbac.create | Used to configure RBAC resources creation. | -| agent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true. | +| agent.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field `create` is true. | | agent.systemProbe.appArmorProfileName | Specify an AppArmor profile. | | agent.systemProbe.bpfDebugEnabled | Logging for kernel debug. | | agent.systemProbe.conntrackEnabled | Enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data. [See the Conntrack documentation][13]. | | agent.systemProbe.debugPort | Specify the port to expose pprof and expvar for system-probe agent. | -| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: /etc/passwd is automatically mounted to allow username resolution. [See the Process documentation][12]. | +| agent.systemProbe.enabled | Enable this to activate live process monitoring. Note: `/etc/passwd` is automatically mounted to allow username resolution. [See the Process documentation][12]. 
| | agent.systemProbe.env | The Datadog SystemProbe supports many [environment variables][2]. | -| agent.systemProbe.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | -| agent.systemProbe.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| agent.systemProbe.resources.limits | Describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| agent.systemProbe.resources.requests | Describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | | agent.systemProbe.secCompCustomProfileConfigMap | Specify a pre-existing ConfigMap containing a custom SecComp profile. | | agent.systemProbe.secCompProfileName | Specify a seccomp profile. | | agent.systemProbe.secCompRootPath | Specify the seccomp profile root directory. | -| agent.systemProbe.securityContext.allowPrivilegeEscalation | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN. | +| agent.systemProbe.securityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the `no_new_privs` flag will be set on the container process. `AllowPrivilegeEscalation` is always true when the container is: 1) run as `Privileged`, 2) has `CAP_SYS_ADMIN`. 
| | agent.systemProbe.securityContext.capabilities.add | Added capabilities. | | agent.systemProbe.securityContext.capabilities.drop | Removed capabilities. | | agent.systemProbe.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. | -| agent.systemProbe.securityContext.procMount | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for read-only paths and masked paths. This requires the ProcMountType feature flag to be enabled. | -| agent.systemProbe.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is false. | -| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. | -| agent.systemProbe.securityContext.seLinuxOptions.level | Level is SELinux level label that applies to the container. 
| -| agent.systemProbe.securityContext.seLinuxOptions.role | Role is a SELinux role label that applies to the container. | -| agent.systemProbe.securityContext.seLinuxOptions.type | Type is a SELinux type label that applies to the container. | -| agent.systemProbe.securityContext.seLinuxOptions.user | User is a SELinux user label that applies to the container. | -| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | GMSACredentialSpec is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | -| agent.systemProbe.securityContext.windowsOptions.runAsUserName | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag. | -| agent.useExtendedDaemonset | UseExtendedDaemonset use ExtendedDaemonset for Agent deployment. Default value is false. | -| clusterAgent.additionalAnnotations | AdditionalAnnotations provide annotations that are added to the Cluster Agent Pods. | -| clusterAgent.additionalLabels | AdditionalLabels provide labels that are added to the cluster checks runner Pods. 
| -| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights. That is, for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | -| clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | -| clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| agent.systemProbe.securityContext.procMount | Denotes the type of proc mount to use for the containers. The default is `DefaultProcMount` which uses the container runtime defaults for read-only paths and masked paths. This requires the `ProcMountType` feature flag to be enabled. | +| agent.systemProbe.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root filesystem. Default is `false`. | +| agent.systemProbe.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. | +| agent.systemProbe.securityContext.runAsNonRoot | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. 
If unset or false, no such validation will be performed. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. | +| agent.systemProbe.securityContext.runAsUser | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. | +| agent.systemProbe.securityContext.seLinuxOptions.level | SELinux level label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.role | SELinux role label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.type | SELinux type label that applies to the container. | +| agent.systemProbe.securityContext.seLinuxOptions.user | SELinux user label that applies to the container. | +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | `GMSACredentialSpec` is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the `GMSACredentialSpecName` field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | `GMSACredentialSpecName` is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | +| agent.systemProbe.securityContext.windowsOptions.runAsUserName | The `UserName` in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. 
This field is beta-level and may be disabled with the `WindowsRunAsUserName` feature flag. | +| agent.useExtendedDaemonset | Use ExtendedDaemonset for Agent deployment. Default value is false. | +| clusterAgent.additionalAnnotations | `AdditionalAnnotations` provide annotations that are added to the Cluster Agent Pods. | +| clusterAgent.additionalLabels | `AdditionalLabels` provide labels that are added to the cluster checks runner Pods. | +| clusterAgent.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights. That is, for each node that meets all of the scheduling requirements (resource request, `requiredDuringScheduling` affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding `matchExpressions`; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are `OR`ed. | +| clusterAgent.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, `requiredDuringScheduling` affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding `podAffinityTerm`; the node(s) with the highest sum are the most preferred. | +| clusterAgent.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each `podAffinityTerm` are intersected, i.e. all terms must be satisfied. | +| clusterAgent.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, `requiredDuringScheduling` anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding `podAffinityTerm`; the node(s) with the highest sum are the most preferred. | | clusterAgent.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. 
due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | | clusterAgent.config.admissionController.enabled | Enable the admission controller to be able to inject APM/DogStatsD config and standard tags (env, service, version) automatically into your pods. | -| clusterAgent.config.admissionController.mutateUnlabelled | MutateUnlabelled enables injecting config without having the pod label 'admission.datadoghq.com/enabled="true"' | -| clusterAgent.config.admissionController.serviceName | ServiceName corresponds to the webhook service name. | -| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the cluster-agents and the daemonset. See the [Cluster Checks][14] documentation. Autodiscovery through Kube Service annotations is automatically enabled. | +| clusterAgent.config.admissionController.mutateUnlabelled | Enables injecting config without having the pod label `admission.datadoghq.com/enabled="true"`. | +| clusterAgent.config.admissionController.serviceName | Corresponds to the webhook service name. | +| clusterAgent.config.clusterChecksEnabled | Enable the Cluster Checks and Endpoint Checks feature on both the Cluster Agent and the DaemonSet. See the [Cluster Checks][14] documentation. Autodiscovery through Kube Service annotations is automatically enabled. | | clusterAgent.config.confd.configMapName | Name of a ConfigMap used to mount a directory. | | clusterAgent.config.env | The Datadog Agent supports many [environment variables][2]. | -| clusterAgent.config.externalMetrics.enabled | Enable the metricsProvider to be able to scale based on metrics in Datadog. | -| clusterAgent.config.externalMetrics.port | If specified, configures the metricsProvider external metrics service port. 
| +| clusterAgent.config.externalMetrics.enabled | Enable the `metricsProvider` to be able to scale based on metrics in Datadog. | +| clusterAgent.config.externalMetrics.port | If specified, configures the `metricsProvider` external metrics service port. | | clusterAgent.config.externalMetrics.useDatadogMetrics | Enable usage of DatadogMetrics CRD (allow to scale on arbitrary queries). | -| clusterAgent.config.logLevel | Set logging verbosity. Valid log levels are: trace, debug, info, warn, error, critical, and off. | -| clusterAgent.config.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | -| clusterAgent.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| clusterAgent.config.logLevel | Set logging verbosity. Valid log levels are: `trace`, `debug`, `info`, `warn`, `error`, `critical`, and `off`. | +| clusterAgent.config.resources.limits | Describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | +| clusterAgent.config.resources.requests | Describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | | clusterAgent.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Agent container. | | clusterAgent.config.volumes | Specify additional volumes in the Datadog Cluster Agent container. | -| clusterAgent.customConfig.configData | ConfigData corresponds to the configuration file content. 
| -| clusterAgent.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content. | +| clusterAgent.customConfig.configData | Corresponds to the configuration file content. | +| clusterAgent.customConfig.configMap.fileKey | Corresponds to the key used in the `ConfigMap.Data` to store the configuration file content. | | clusterAgent.customConfig.configMap.name | Name the ConfigMap. | | clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from. | -| clusterAgent.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6 Use "datadog/dogstatsd:latest" for Standalone Datadog Agent DogStatsD6 Use "datadog/cluster-agent:latest" for Datadog Cluster Agent | -| clusterAgent.image.pullPolicy | The Kubernetes pull policy Use Always, Never or IfNotPresent | -| clusterAgent.image.pullSecrets | It is possible to specify docker registry credentials. See the [Kubernetes documentation][9]. | -| clusterAgent.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. | -| clusterAgent.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| clusterAgent.image.name | Define the image to use. Use `datadog/agent:latest` for Datadog Agent 6. Use `datadog/dogstatsd:latest` for stand-alone Datadog Agent DogStatsD. Use `datadog/cluster-agent:latest` for Datadog Cluster Agent. | +| clusterAgent.image.pullPolicy | The Kubernetes pull policy. Use `Always`, `Never`, or `IfNotPresent`. 
| +| clusterAgent.image.pullSecrets | It is possible to specify Docker registry credentials. See the [Kubernetes documentation][9]. | +| clusterAgent.nodeSelector | Selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. | +| clusterAgent.priorityClassName | If specified, indicates the pod's priority. `system-node-critical` and `system-cluster-critical` are two special keywords that indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a `PriorityClass` object with that name. If not specified, the pod priority will be default or zero if there is no default. | | clusterAgent.rbac.create | Used to configure RBAC resources creation. | -| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true. | +| clusterAgent.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field `Create` is true. | | clusterAgent.replicas | Number of the Cluster Agent replicas. | | clusterAgent.tolerations | If specified, the Cluster Agent pod's tolerations. | -| clusterChecksRunner.additionalAnnotations | AdditionalAnnotations provide annotations that will be added to the cluster checks runner Pods. | -| clusterChecksRunner.additionalLabels | AdditionalLabels provide labels that will be added to the cluster checks runner Pods. | -| clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | -| clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are ORed. | -| clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | -| clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
| -| clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.additionalAnnotations | Provide annotations that will be added to the cluster checks runner Pods. | +| clusterChecksRunner.additionalLabels | Provide labels that will be added to the cluster checks runner Pods. | +| clusterChecksRunner.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, `requiredDuringScheduling` affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms | Required. A list of node selector terms. The terms are `OR`ed. 
| +| clusterChecksRunner.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, `requiredDuringScheduling` affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding `podAffinityTerm`; the node(s) with the highest sum are the most preferred. | +| clusterChecksRunner.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each `podAffinityTerm` are intersected, i.e. all terms must be satisfied. | +| clusterChecksRunner.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, `requiredDuringScheduling` anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding `podAffinityTerm`; the node(s) with the highest sum are the most preferred. | | clusterChecksRunner.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. | | clusterChecksRunner.config.env | The Datadog Agent supports many [environment variables][2]. | -| clusterChecksRunner.config.logLevel | Set logging verbosity. Valid log levels are: trace, debug, info, warn, error, critical, and off. | +| clusterChecksRunner.config.logLevel | Set logging verbosity. Valid log levels are: `trace`, `debug`, `info`, `warn`, `error`, `critical`, and `off`. | | clusterChecksRunner.config.resources.limits | Limits describes the maximum amount of compute resources allowed. See the [Kubernetes documentation][3]. | -| clusterChecksRunner.config.resources.requests | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | +| clusterChecksRunner.config.resources.requests | Describes the minimum amount of compute resources required. 
If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | | clusterChecksRunner.config.volumeMounts | Specify additional volume mounts in the Datadog Cluster Check Runner container. | | clusterChecksRunner.config.volumes | Specify additional volumes in the Datadog Cluster Check Runner container. | -| clusterChecksRunner.customConfig.configData | ConfigData corresponds to the configuration file content. | -| clusterChecksRunner.customConfig.configMap.fileKey | FileKey corresponds to the key used in the ConfigMap.Data to store the configuration file content. | +| clusterChecksRunner.customConfig.configData | Corresponds to the configuration file content. | +| clusterChecksRunner.customConfig.configMap.fileKey | Corresponds to the key used in the `ConfigMap.Data` to store the configuration file content. | | clusterChecksRunner.customConfig.configMap.name | Name the ConfigMap. | | clusterChecksRunner.deploymentName | Name of the cluster checks deployment to create or migrate from. | | clusterChecksRunner.image.name | Define the image to use Use "datadog/agent:latest" for Datadog Agent 6. Use "datadog/dogstatsd:latest" for standalone Datadog Agent DogStatsD. Use "datadog/cluster-agent:latest" for Datadog Cluster Agent. | -| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy. Use Always, Never, or IfNotPresent. | +| clusterChecksRunner.image.pullPolicy | The Kubernetes pull policy. Use `Always`, `Never`, or `IfNotPresent`. | | clusterChecksRunner.image.pullSecrets | It is possible to specify docker registry credentials. See the [Kubernetes documentation][9]. | -| clusterChecksRunner.nodeSelector | NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. 
| -| clusterChecksRunner.priorityClassName | If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. | +| clusterChecksRunner.nodeSelector | Selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. | +| clusterChecksRunner.priorityClassName | If specified, indicates the pod's priority. `system-node-critical` and `system-cluster-critical` are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a `PriorityClass` object with that name. If not specified, the pod priority will be default or zero if there is no default. | | clusterChecksRunner.rbac.create | Used to configure RBAC resources creation. | -| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use Ignored if the field Create is true. | +| clusterChecksRunner.rbac.serviceAccountName | Used to set up the service account name to use. Ignored if the field `Create` is true. | | clusterChecksRunner.replicas | Number of the Cluster Agent replicas. | -| clusterChecksRunner.tolerations | If specified, the Cluster-Checks pod's tolerations. | -| clusterName | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily | -| credentials.apiKey | APIKey Set this to your Datadog API key before the Agent runs. | -| credentials.apiKeyExistingSecret | APIKeyExistingSecret is DEPRECATED. To pass the API key through an existing secret, consider "apiSecret" instead. If set, this parameter takes precedence over "apiKey". 
| -| credentials.apiSecret.keyName | KeyName is the key of the secret to use. | -| credentials.apiSecret.secretName | SecretName is the name of the secret. | -| credentials.appKey | If you are using clusterAgent.metricsProvider.enabled = true, you must set a Datadog application key for read access to your metrics. | -| credentials.appKeyExistingSecret | AppKeyExistingSecret is DEPRECATED. To pass the APP key through an existing secret, consider "appSecret" instead. If set, this parameter takes precedence over "appKey". | -| credentials.appSecret.keyName | KeyName is the key of the secret to use. | -| credentials.appSecret.secretName | SecretName is the name of the secret. | -| credentials.token | This needs to be at least 32 characters a-zA-z. It is a preshared key between the Node Agents and the Cluster Agent. | -| credentials.useSecretBackend | Use the Agent secret backend feature for retreiving all credentials needed by the different components: Agent, Cluster, Cluster Checks. If useSecretBackend: true, other credential parameters will be ignored. Default value is false. | -| site | The site of the Datadog intake to send Agent data to. Set to 'datadoghq.eu' to send data to the EU site. | +| clusterChecksRunner.tolerations | If specified, the Cluster Check pod's tolerations. | +| clusterName | Set a unique cluster name to allow scoping hosts and Cluster Checks Runner easily. | +| credentials.apiKey | Set this to your Datadog API key before the Agent runs. | +| credentials.apiKeyExistingSecret | DEPRECATED. To pass the API key through an existing secret, consider `apiSecret` instead. If set, this parameter takes precedence over `apiKey`. | +| credentials.apiSecret.keyName | Key of the secret to use. | +| credentials.apiSecret.secretName | Name of the secret. | +| credentials.appKey | If you are using `clusterAgent.metricsProvider.enabled = true`, you must set a Datadog application key for read access to your metrics. 
| +| credentials.appKeyExistingSecret | DEPRECATED. To pass the app key through an existing secret, consider `appSecret` instead. If set, this parameter takes precedence over `appKey`. | +| credentials.appSecret.keyName | Key of the secret to use. | +| credentials.appSecret.secretName | Name of the secret. | +| credentials.token | A preshared key between the Node Agents and the Cluster Agent. This needs to be at least 32 characters a-zA-Z. | +| credentials.useSecretBackend | Use the Agent secret backend feature for retrieving all credentials needed by the different components: Agent, Cluster, Cluster Checks. If `useSecretBackend: true`, other credential parameters will be ignored. Default value is false. | +| site | The site of the Datadog intake to send Agent data to. Set to `datadoghq.eu` to send data to the EU site. | {{< /table >}} From 332dd9a6f7dce5f6eac2fb6d77d26a47e04d768d Mon Sep 17 00:00:00 2001 From: cswatt Date: Thu, 1 Oct 2020 17:21:18 -0700 Subject: [PATCH 25/26] Apply suggestions from code review Co-authored-by: Kaylyn --- .../kubernetes/operator_configuration.md | 26 +++++++++---------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/content/en/agent/kubernetes/operator_configuration.md b/content/en/agent/kubernetes/operator_configuration.md index 319a460fb5cef..1575d236241d0 100644 --- a/content/en/agent/kubernetes/operator_configuration.md +++ b/content/en/agent/kubernetes/operator_configuration.md @@ -43,17 +43,17 @@ spec: | agent.config.dogstatsd.dogstatsdOriginDetection | Enable origin detection for container tagging. See the [Unix Socket origin detection documentation][5]. | | agent.config.dogstatsd.useDogStatsDSocketVolume | Enable DogStatsD over a Unix Domain Socket. [See the Unix Socket documentation][6]. | | agent.config.env | The Datadog Agent supports many [environment variables][2]. | -| agent.config.hostPort | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536.
If `HostNetwork` is specified, this must match `ContainerPort`. Most containers do not need this. | +| agent.config.hostPort | Number of the port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If `HostNetwork` is specified, this must match `ContainerPort`. Most containers do not need this. | | agent.config.leaderElection | Enables leader election mechanism for event collection. | | agent.config.logLevel | Set logging verbosity. Valid log levels are: `trace`, `debug`, `info`, `warn`, `error`, `critical`, and `off`. | | agent.config.podAnnotationsAsTags | Provide a mapping of Kubernetes Annotations to Datadog Tags. `: ` | -| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes Labels to Datadog Tags. `: ` | +| agent.config.podLabelsAsTags | Provide a mapping of Kubernetes labels to Datadog tags. `: ` | | agent.config.resources.limits | Describes the maximum amount of compute resources allowed. [See the Kubernetes documentation][3]. | | agent.config.resources.requests | Describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified. Otherwise, it defaults to an implementation-defined value. [See the Kubernetes documentation][3]. | | agent.config.securityContext.allowPrivilegeEscalation | Controls whether a process can gain more privileges than its parent process. This Boolean directly controls if the `no_new_privs` flag will be set on the container process. `AllowPrivilegeEscalation` is true always when the container is both run as `Privileged`, and has `CAP_SYS_ADMIN`. | | agent.config.securityContext.capabilities.add | Added capabilities. | | agent.config.securityContext.capabilities.drop | Removed capabilities. | -| agent.config.securityContext.privileged | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to `false`. 
| +| agent.config.securityContext.privileged | Run the container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to `false`. | | agent.config.securityContext.procMount | `procMount` denotes the type of proc mount to use for the containers. The default is `DefaultProcMount` which uses the container runtime defaults for read-only paths and masked paths. This requires the `ProcMountType` feature flag to be enabled. | | agent.config.securityContext.readOnlyRootFilesystem | Whether this container has a read-only root file system. Default is `false`. | | agent.config.securityContext.runAsGroup | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. | @@ -66,7 +66,7 @@ spec: | agent.config.securityContext.windowsOptions.gmsaCredentialSpec | `GMSACredentialSpec` is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the `GMSACredentialSpecName` field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.config.securityContext.windowsOptions.gmsaCredentialSpecName | `GMSACredentialSpecName` is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.config.securityContext.windowsOptions.runAsUserName | The `UserName` in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. This field is beta-level and may be disabled with the `WindowsRunAsUserName` feature flag. 
| -| agent.config.tags | List of tags to attach to every metric, event, and service check collected by this Agent. See the [Tagging documentation][8] | +| agent.config.tags | List of tags to attach to every metric, event, and service check collected by this Agent. See the [Tagging documentation][8]. | | agent.config.tolerations | If specified, the Agent pod's tolerations. | | agent.config.volumeMounts | Specify additional volume mounts in the Datadog Agent container. | | agent.config.volumes | Specify additional volumes in the Datadog Agent container. | @@ -78,24 +78,24 @@ spec: | agent.deploymentStrategy.canary.paused | | | agent.deploymentStrategy.canary.replicas | | | agent.deploymentStrategy.reconcileFrequency | The reconcile frequency of the ExtendDaemonSet. | -| agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maxium number of pods created in parallel. Default value is 250. | -| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | `maxPodSchedulerFailure` is the maxinum number of pods scheduled on its Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute. | +| agent.deploymentStrategy.rollingUpdate.maxParallelPodCreation | The maximum number of pods created in parallel. Default value is 250. | +| agent.deploymentStrategy.rollingUpdate.maxPodSchedulerFailure | `maxPodSchedulerFailure` is the maximum number of pods scheduled on its Node due to a scheduler failure: resource constraints. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute. | | agent.deploymentStrategy.rollingUpdate.maxUnavailable | The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). 
Absolute number is calculated from percentage by rounding up. This cannot be 0. Default value is 1. | | agent.deploymentStrategy.rollingUpdate.slowStartAdditiveIncrease | Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Default value is 5. | | agent.deploymentStrategy.rollingUpdate.slowStartIntervalDuration | The duration interval. Default value is 1min. | | agent.deploymentStrategy.updateStrategyType | The update strategy used for the DaemonSet. | | agent.dnsConfig.nameservers | A list of DNS name server IP addresses. This are appended to the base nameservers generated from `dnsPolicy`. Duplicated nameservers are removed. | -| agent.dnsConfig.options | A list of DNS resolver options. This are merged with the base options generated from `dnsPolicy`. Duplicated entries will be removed. Resolution options given in `options` override those that appear in the base `dnsPolicy`. | +| agent.dnsConfig.options | A list of DNS resolver options. These are merged with the base options generated from `dnsPolicy`. Duplicated entries will be removed. Resolution options given in `options` override those that appear in the base `dnsPolicy`. | | agent.dnsConfig.searches | A list of DNS search domains for host-name lookup. This are appended to the base search paths generated from `dnsPolicy`. Duplicated search paths are removed. | | agent.dnsPolicy | Set DNS policy for the pod. Defaults to `ClusterFirst`. Valid values are `ClusterFirstWithHostNet`, `ClusterFirst`, `Default`, or `None`. DNS parameters given in `dnsConfig` are merged with the policy selected with `dnsPolicy`. To have DNS options set along with `hostNetwork`, you have to specify `dnsPolicy` explicitly to `ClusterFirstWithHostNet`. | | agent.env | Environment variables for all Datadog Agents. [See the Docker environment variables documentation][2]. | | agent.hostNetwork | Host networking requested for this pod. 
Use the host's network namespace. If this option is set, the ports that will be used must be specified. Defaults to `false`. | | agent.hostPID | Use the host's PID namespace. Optional: Defaults to `false`. | -| agent.image.name | Define the image to use Use `datadog/agent:latest` for Datadog Agent 6. Use `datadog/dogstatsd:latest` for stand-alone Datadog Agent DogStatsD. Use `datadog/cluster-agent:latest` for Datadog Cluster Agent. | +| agent.image.name | Define the image to use. Use `datadog/agent:latest` for Datadog Agent 6. Use `datadog/dogstatsd:latest` for stand-alone Datadog Agent DogStatsD. Use `datadog/cluster-agent:latest` for Datadog Cluster Agent. | | agent.image.pullPolicy | The Kubernetes pull policy. Use `Always`, `Never`, or `IfNotPresent`. | -| agent.image.pullSecrets | It is possible to specify Docker registry credentials. [See the Kubernetes documentation][9]. | +| agent.image.pullSecrets | Specifies the Docker registry credentials. [See the Kubernetes documentation][9]. | | agent.log.containerCollectUsingFiles | Collect logs from files in `/var/log/pods` instead of using container runtime API. This is usually the most efficient way of collecting logs. See the [Log Collection][10] documentation. Default: `true`. | -| agent.log.containerLogsPath | Allow log collection from container log path. Set to a different path if not using docker runtime. See the [Kubernetes documentation][11]. Defaults to `/var/lib/docker/containers`. | +| agent.log.containerLogsPath | Allow log collection from the container log path. Set to a different path if not using the Docker runtime. See the [Kubernetes documentation][11]. Defaults to `/var/lib/docker/containers`. | | agent.log.enabled | Enable this to activate Datadog Agent log collection. See the [Log Collection][10] documentation. | | agent.log.logsConfigContainerCollectAll | Enable this to allow log collection for all containers. See the [Log Collection][10] documentation.
| | agent.log.openFilesLimit | Set the maximum number of logs files that the Datadog Agent tails up to. Increasing this limit can increase resource consumption of the Agent. See the [Log Collection][10] documentation. Defaults to 100. | @@ -108,7 +108,7 @@ spec: | agent.process.resources.requests | Describes the minimum amount of compute resources required. If `requests` is omitted for a container, it defaults to `limits` if that is explicitly specified, otherwise to an implementation-defined value. See the [Kubernetes documentation][3]. | | agent.rbac.create | Used to configure RBAC resources creation. | | agent.rbac.serviceAccountName | Used to set up the service account name to use `Ignored` if the field `Create` is true. | -| agent.systemProbe.appArmorProfileName | Specify a AppArmor profile. | +| agent.systemProbe.appArmorProfileName | Specify an AppArmor profile. | | agent.systemProbe.bpfDebugEnabled | Logging for kernel debug. | | agent.systemProbe.conntrackEnabled | Enable the system-probe agent to connect to the netlink/conntrack subsystem to add NAT information to connection data. [See the Conntrack documentation][13]. | | agent.systemProbe.debugPort | Specify the port to expose pprof and expvar for system-probe agent. | @@ -134,7 +134,7 @@ spec: | agent.systemProbe.securityContext.seLinuxOptions.user | SELinux user label that applies to the container. | | agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpec | `GMSACredentialSpec` is where the [GMSA admission webhook][7] inlines the contents of the GMSA credential spec named by the `GMSACredentialSpecName` field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. | | agent.systemProbe.securityContext.windowsOptions.gmsaCredentialSpecName | `GMSACredentialSpecName` is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag. 
-| agent.systemProbe.securityContext.windowsOptions.runAsUserName | The `UserName` in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. This field is beta-level and may be disabled with the `WindowsRunAsUserName` feature flag. |
+| agent.systemProbe.securityContext.windowsOptions.runAsUserName | Use the `UserName` in Windows to run the entry point of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in `PodSecurityContext`. If set in both `SecurityContext` and `PodSecurityContext`, the value specified in `SecurityContext` takes precedence. This field is beta-level and may be disabled with the `WindowsRunAsUserName` feature flag. |
 | agent.useExtendedDaemonset | Use ExtendedDaemonset for the Agent deployment. Defaults to `false`. |
 | clusterAgent.additionalAnnotations | `AdditionalAnnotations` provide annotations that are added to the Cluster Agent Pods. |
 | clusterAgent.additionalLabels | `AdditionalLabels` provide labels that are added to the cluster checks runner Pods. |
@@ -164,7 +164,7 @@ spec:
 | clusterAgent.deploymentName | Name of the Cluster Agent Deployment to create or migrate from. |
 | clusterAgent.image.name | Define the image to use. Use `datadog/agent:latest` for Datadog Agent 6. Use `datadog/dogstatsd:latest` for stand-alone Datadog Agent DogStatsD. Use `datadog/cluster-agent:latest` for Datadog Cluster Agent. |
 | clusterAgent.image.pullPolicy | The Kubernetes pull policy. Use `Always`, `Never`, or `IfNotPresent`. |
-| clusterAgent.image.pullSecrets | It is possible to specify Docker registry credentials. See the [Kubernetes documentation][9]. |
+| clusterAgent.image.pullSecrets | Specifies Docker registry credentials. See the [Kubernetes documentation][9]. |
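Taken together, the rows corrected in these hunks describe fields of the operator's `DatadogAgent` custom resource. As a quick orientation aid (not part of the patch itself), here is a minimal sketch of how a few of those fields nest; it assumes the beta `datadoghq.com/v1alpha1` API group shown in the operator docs, and all values are placeholders:

```yaml
# Hypothetical DatadogAgent resource illustrating fields from the table above.
# Replace <DATADOG_API_KEY> before applying; field values are examples only.
apiVersion: datadoghq.com/v1alpha1
kind: DatadogAgent
metadata:
  name: datadog
spec:
  credentials:
    apiKey: <DATADOG_API_KEY>
  agent:
    image:
      name: "datadog/agent:latest"        # agent.image.name
      pullPolicy: IfNotPresent            # agent.image.pullPolicy
    log:
      enabled: true                       # agent.log.enabled
      logsConfigContainerCollectAll: true # agent.log.logsConfigContainerCollectAll
      containerLogsPath: /var/lib/docker/containers  # agent.log.containerLogsPath
  clusterAgent:
    image:
      name: "datadog/cluster-agent:latest" # clusterAgent.image.name
```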
 | clusterAgent.nodeSelector | Selector which must match a node's labels for the pod to be scheduled on that node. See the [Kubernetes documentation][15]. |
 | clusterAgent.priorityClassName | If specified, indicates the pod's priority. `system-node-critical` and `system-cluster-critical` are two special keywords that indicate the highest priorities, with the former being the highest. Any other name must be defined by creating a `PriorityClass` object with that name. If not specified, the pod priority is the default, or zero if there is no default. |
 | clusterAgent.rbac.create | Used to configure RBAC resources creation. |

From 44ed9a508c68ee6099b5d6e8c8076a25f574601c Mon Sep 17 00:00:00 2001
From: Cecilia Watt
Date: Thu, 1 Oct 2020 17:25:14 -0700
Subject: [PATCH 26/26] adding beta alerts

---
 content/en/agent/guide/operator-advanced.md | 4 +++-
 content/en/agent/kubernetes/_index.md       | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/content/en/agent/guide/operator-advanced.md b/content/en/agent/guide/operator-advanced.md
index 4033ec4a5e54b..34dc59f1c1120 100644
--- a/content/en/agent/guide/operator-advanced.md
+++ b/content/en/agent/guide/operator-advanced.md
@@ -7,7 +7,9 @@ further_reading:
     text: 'Datadog and Kubernetes'
 ---
 
-[The Datadog Operator][1] is in public beta. The Datadog Operator is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
+<div class="alert alert-warning">The Datadog Operator is in public beta. If you have any feedback or questions, contact Datadog support.</div>
+
+[The Datadog Operator][1] is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
 
 ## Prerequisites

diff --git a/content/en/agent/kubernetes/_index.md b/content/en/agent/kubernetes/_index.md
index 770366ceace1a..b34cf88f8826b 100644
--- a/content/en/agent/kubernetes/_index.md
+++ b/content/en/agent/kubernetes/_index.md
@@ -174,7 +174,9 @@ To install the Datadog Agent on your Kubernetes cluster:
 {{% /tab %}}
 {{% tab "Operator" %}}
 
-[The Datadog Operator][1] is in public beta. The Datadog Operator is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
+<div class="alert alert-warning">The Datadog Operator is in public beta. If you have any feedback or questions, contact Datadog support.</div>
+
+[The Datadog Operator][1] is a way to deploy the Datadog Agent on Kubernetes and OpenShift. It reports deployment status, health, and errors in its Custom Resource status, and it limits the risk of misconfiguration thanks to higher-level configuration options.
 
 ## Prerequisites
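The Prerequisites section these patches lead into requires a Kubernetes cluster at version 1.14.0 or later (versions >= 1.11.0 may work). A small shell sketch of a pre-flight check — a hypothetical helper, not part of either patch — shows one way to compare a cluster version against that floor using version-aware sort:

```shell
# Hypothetical pre-flight check: compare a cluster version against the
# operator's documented minimum (1.14.0) using version-aware sort.
MIN_VERSION="1.14.0"
CLUSTER_VERSION="1.16.3"  # in practice, read this from `kubectl version`

# sort -V orders version strings numerically per component; if the minimum
# sorts first, the cluster version is at or above the floor.
if [ "$(printf '%s\n%s\n' "$MIN_VERSION" "$CLUSTER_VERSION" | sort -V | head -n1)" = "$MIN_VERSION" ]; then
  echo "supported"
else
  echo "unsupported"
fi
```

With `CLUSTER_VERSION="1.16.3"` the check reports a supported cluster; a value such as `1.10.0` would not meet the floor.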