
Secrets Provider Configuration via annotations M1 #330

Closed
3 tasks
doodlesbykumbi opened this issue Jun 10, 2021 · 1 comment · Fixed by #359

Comments

@doodlesbykumbi (Contributor) commented Jun 10, 2021

Currently, Secrets Provider configuration is provided via environment variables. The goal of this issue is to make it possible to get this configuration from annotations.

Context

The Secrets Provider container is not directly aware of the annotations from the Pod manifest. The annotations are passed down via
the Kubernetes downward API. In file form they take the following format:

metadata.annotations - all of the pod's annotations, formatted as annotation-key="escaped-annotation-value" with one annotation per line

The code for the formatting is available in the Kubernetes source. This might be useful for parsing the annotations.
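As a sketch of what such a parser could look like: the kubelet formats each annotation as `key="value"` with Go-style quoting, so `strconv.Unquote` can reverse the escaping. The function name below is illustrative, not part of the existing Secrets Provider code.

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// parseAnnotationsFile parses downward-API annotation file content of the
// form `annotation-key="escaped-annotation-value"`, one annotation per line.
// The kubelet quotes values with Go escaping, so strconv.Unquote reverses
// it, including multiline values escaped as \n.
func parseAnnotationsFile(content string) (map[string]string, error) {
	annotations := map[string]string{}
	scanner := bufio.NewScanner(strings.NewReader(content))
	for scanner.Scan() {
		line := scanner.Text()
		if line == "" {
			continue
		}
		key, quoted, found := strings.Cut(line, "=")
		if !found {
			return nil, fmt.Errorf("malformed annotation line: %q", line)
		}
		value, err := strconv.Unquote(quoted)
		if err != nil {
			return nil, fmt.Errorf("malformed value for %q: %w", key, err)
		}
		annotations[key] = value
	}
	return annotations, nil
}

func main() {
	parsed, err := parseAnnotationsFile("conjur.org/secrets-destination=\"file\"\nconjur.org/retry-count-limit=\"5\"")
	fmt.Println(parsed, err)
}
```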

Requirements

In order to support annotations as a mechanism for configuration, we need to assume, as input, a file from the downward API containing the container's annotations. This file should be parsed and used to populate the same internal representations of configuration that were previously populated via environment variables.

Below is a mapping of internal configuration fields and source annotations.

Container config:

  • PodNamespace – MY_POD_NAMESPACE (still comes from the downward API)
  • RetryCountLimit – conjur.org/retry-count-limit
  • RetryIntervalSec – conjur.org/retry-interval-sec
  • StoreType – conjur.org/secrets-destination

Kubernetes Secrets config:

  • RequiredK8sSecrets – conjur.org/k8s-secrets

The places in the code where this configuration is used are:

secretsConfig, err := secretsConfigProvider.NewFromEnv()
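One way the mapping above could translate into code is a constructor that takes the parsed annotations map instead of reading the environment. The struct shape, defaults, and function name below are illustrative sketches, not the actual Secrets Provider implementation.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// Config mirrors the internal configuration previously built by
// secretsConfigProvider.NewFromEnv. Field names follow the mapping above;
// the defaults here are illustrative.
type Config struct {
	PodNamespace       string   // MY_POD_NAMESPACE (still from the downward API)
	RetryCountLimit    int      // conjur.org/retry-count-limit
	RetryIntervalSec   int      // conjur.org/retry-interval-sec
	StoreType          string   // conjur.org/secrets-destination
	RequiredK8sSecrets []string // conjur.org/k8s-secrets
}

// newFromAnnotations populates Config from a parsed annotations map.
func newFromAnnotations(annotations map[string]string) (*Config, error) {
	cfg := &Config{
		PodNamespace:     os.Getenv("MY_POD_NAMESPACE"),
		RetryCountLimit:  5, // illustrative default
		RetryIntervalSec: 1, // illustrative default
	}
	if v, ok := annotations["conjur.org/retry-count-limit"]; ok {
		n, err := strconv.Atoi(v)
		if err != nil {
			return nil, fmt.Errorf("invalid retry-count-limit %q: %w", v, err)
		}
		cfg.RetryCountLimit = n
	}
	if v, ok := annotations["conjur.org/secrets-destination"]; ok {
		cfg.StoreType = v
	}
	// conjur.org/retry-interval-sec and conjur.org/k8s-secrets would follow
	// the same pattern (the latter holds a YAML list).
	return cfg, nil
}

func main() {
	cfg, err := newFromAnnotations(map[string]string{
		"conjur.org/secrets-destination": "file",
		"conjur.org/retry-count-limit":   "10",
	})
	fmt.Println(cfg, err)
}
```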

Acceptance Criteria

  • An annotation parser exists for extracting arbitrary annotations from a downward API annotation file. This should include a test case to validate this behavior.
  • Annotations can be used instead of environment variables. This should include a test case to validate this behavior.
  • Annotations take precedence over environment variables. This should include a test case to validate this behavior.

Notes on testing

For the test cases we recommend writing unit tests. Validating every aspect of configuration via annotations with E2E tests would be very expensive. It's better for the bulk of the tests to be unit tests, and as a finishing touch to add annotations to a single E2E smoke test.

You can use the downward API to generate static fixtures for the unit tests. Alternatively, the code for formatting is available in the Kubernetes source. The formatting logic lives in a public package, so you could also dynamically generate your fixtures:

// Generates an annotation-file fixture using the same formatting
// logic the kubelet uses.
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/fieldpath"
)

func main() {
	fmt.Println(fieldpath.FormatMap(map[string]string{"moo": "meow"}))
}
@diverdane (Contributor) commented:
I'm wondering if we have to deal with fieldPath, or whether we can start from the assumption that Kubernetes/kubelet will provide us the annotations as a YAML file at /conjur/podinfo/annotations (containing a key/value pair for each annotation)?

Adding some more background:

The Secrets Provider container is exposed to its Pod annotations via the Kubernetes Downward API as described in this example:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#store-pod-fields

For M1 Push-to-File, the volumeMount for Pod info will be set in Deployment manifests to:

    - mountPath: /conjur/podinfo
      name: conjur-podinfo

When the Pod is started up, the Kubernetes kubelet agent will inject all Pod annotations into a file called "annotations" in this volume, as a list of YAML key/value pairs.

So the Secrets Provider will see a file at location: /conjur/podinfo/annotations, and the content will look like the following YAML:

conjur.org/authn-identity: "host/conjur/authn-k8s/cluster/apps/inventory-api"
conjur.org/container-mode: "init"
conjur.org/secrets-destination: "file"
conjur.org/retry-count-limit: "5"

    <--- SNIP --->

conjur.org/conjur-secrets.my-cool-application: |
    - "prod/backend/url"
    - "prod/backend/port"
    - "prod/backend/password"
    - "prod/backend/username"
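A multiline value like conjur-secrets above is itself a small YAML list. A real implementation would hand it to a YAML library; as a dependency-free sketch of the idea, a flat list of quoted scalars can be split like this (function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseYAMLList parses a simple YAML list value, like the multiline
// conjur-secrets annotation above, into a string slice. It handles only
// flat lists of quoted or bare scalars; nested YAML needs a real parser.
func parseYAMLList(value string) []string {
	var items []string
	for _, line := range strings.Split(value, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "- ") {
			item := strings.TrimPrefix(line, "- ")
			items = append(items, strings.Trim(item, `"`))
		}
	}
	return items
}

func main() {
	secrets := parseYAMLList("- \"prod/backend/url\"\n- \"prod/backend/port\"")
	fmt.Println(secrets)
}
```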

I think the workflow that we're looking for is something like the following. We'll do this processing once at the beginning of time, and save the results in a structure to be used later by other parts of SP code:

  • Check to see if directory /conjur/podinfo exists
    • If doesn't exist, print error that manifest is missing expected volume/volumeMount
  • If file /conjur/podinfo/annotations exists
    • YAML unmarshal the file contents into a map[string]string annotations map
    • Iterate through all annotations map entries:
      • If key begins with conjur.org/ but the remainder of the key is otherwise unknown, log INFO re. unknown annotation
    • Iterate through all possible SP settings that can be configured via annotations:
      • If setting is included in the annotations map from previous step:
        • Record setting in a structure that can be referenced later by other SP code
      • Else, if setting not included in annotations map:
        • If corresponding env var is set, record that setting
      • Else, if a default value is defined, record that setting

It's assumed that checks for required values and input validation on values are done wherever the settings are consumed.
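The workflow above could be sketched roughly as follows. The paths match the M1 manifest; the known-settings table, function names, and env-var fallbacks are illustrative assumptions, and file parsing is stubbed out with a pre-parsed map for brevity.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const podInfoDir = "/conjur/podinfo" // volumeMount path from the M1 manifest

// knownConjurSettings maps the conjur.org/ annotations the Secrets Provider
// understands to their corresponding env vars (illustrative subset).
var knownConjurSettings = map[string]string{
	"conjur.org/retry-count-limit":   "RETRY_COUNT_LIMIT",
	"conjur.org/retry-interval-sec":  "RETRY_INTERVAL_SEC",
	"conjur.org/secrets-destination": "SECRETS_DESTINATION",
	"conjur.org/k8s-secrets":         "K8S_SECRETS",
}

// gatherSettings records each known setting from the annotations map,
// falling back to the corresponding env var; unknown conjur.org/ keys are
// returned so the caller can log them at INFO level.
func gatherSettings(annotations map[string]string) (settings map[string]string, unknown []string) {
	settings = map[string]string{}
	for key := range annotations {
		if strings.HasPrefix(key, "conjur.org/") {
			if _, ok := knownConjurSettings[key]; !ok {
				unknown = append(unknown, key)
			}
		}
	}
	for key, envVar := range knownConjurSettings {
		if v, ok := annotations[key]; ok {
			settings[key] = v
		} else if v, ok := os.LookupEnv(envVar); ok {
			settings[key] = v
		}
	}
	return settings, unknown
}

func main() {
	if _, err := os.Stat(podInfoDir); os.IsNotExist(err) {
		fmt.Printf("error: %s missing; manifest lacks the expected volume/volumeMount\n", podInfoDir)
	}
	annotations := map[string]string{ // stand-in for the parsed annotations file
		"conjur.org/secrets-destination": "file",
		"conjur.org/not-a-real-setting":  "x",
	}
	settings, unknown := gatherSettings(annotations)
	fmt.Println(settings, unknown)
}
```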

Two more requirements that we can consider adding are:

  • Unknown Conjur annotation causes INFO level log
  • Multiline annotation values are parsed correctly
