
SOPS multi-tenancy #324

Open
moolen opened this issue Apr 15, 2021 · 6 comments

Comments

@moolen

moolen commented Apr 15, 2021

Context:

  • running on EKS, using IRSA and pod-identity-webhook
  • multiple tenants should be onboarded onto the cluster
  • we're enforcing spec.serviceAccountName
  • one tenant should not be able to decrypt secrets of other tenants

Problem:
Right now, kustomize-controller uses a single local set of credentials defined by environment variables. These credentials allow the controller to make API calls against AWS.
I want to onboard multiple tenants, and they should not be able to decrypt each other's secrets. However, since all tenants share the same set of credentials, any tenant can decrypt the secrets of the others.

With GPG we have better tenant isolation, because Kind=Kustomization is namespaced and stores a local reference to a GPG private key (i.e. one GPG key per namespace).
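To illustrate the per-namespace GPG isolation described above, here is a sketch of a tenant's setup; all names here are illustrative, not taken from this thread:

```yaml
# Hypothetical tenant-a namespace: the Kustomization and its GPG key
# live in the same namespace, so RBAC scopes decryption to the tenant.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenant-a-apps        # illustrative name
  namespace: tenant-a
spec:
  interval: 5m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: tenant-a-repo      # illustrative name
  decryption:
    provider: sops
    secretRef:
      name: sops-gpg         # secret holding this tenant's private key (.asc)
```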

Proposed solutions:

(1) use multiple flux instances
This is meh for obvious operational reasons. Question: can I have multiple kustomize-controllers? Would you support something similar to ingressClassName to shard responsibilities between controllers?

(2) use serviceAccount/TokenRequest API
Since the Kustomization refers to a spec.serviceAccountName, we could implement something similar to the service account token volume projection API in kubelet (docs).
We can feed those tokens into the AWS SDK to perform sts:AssumeRoleWithWebIdentity; this needs to be implemented in the SOPS repo, of course.
With that we basically mimic the behavior of the eks-pod-identity-webhook, but gain the ability to assume multiple roles within one pod. I'm not sure yet what implications elevating the controller's privileges to use the TokenRequest API would have.
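For comparison, the service account token volume projection that option 2 would mimic (and that eks-pod-identity-webhook injects) looks roughly like this; role ARN, names, and paths are illustrative:

```yaml
# Sketch of a projected service account token, as injected by
# eks-pod-identity-webhook. The AWS SDK picks it up via
# AWS_WEB_IDENTITY_TOKEN_FILE / AWS_ROLE_ARN and exchanges it
# with sts:AssumeRoleWithWebIdentity.
apiVersion: v1
kind: Pod
metadata:
  name: kustomize-controller            # illustrative
spec:
  serviceAccountName: tenant-a          # illustrative
  containers:
    - name: manager
      image: fluxcd/kustomize-controller  # illustrative, tag omitted
      env:
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::111122223333:role/tenant-a  # illustrative
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
      volumeMounts:
        - name: aws-token
          mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
  volumes:
    - name: aws-token
      projected:
        sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com
              expirationSeconds: 86400
              path: token
```

The difference under option 2 is that the controller would request such tokens itself (one per tenant service account) instead of relying on a single projected token for the whole pod.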

@stefanprodan
Member

@moolen the current implementation assumes that each tenant uses OpenPGP or Age encryption keys, as Kubernetes secrets are subject to RBAC and so are Kustomization objects. A tenant has no means of accessing another tenant's secrets, since secretRef is a local reference, and we also isolate the keys for each reconciliation inside the kustomize-controller container.

For KMS access with IRSA, ideally option 2 should be implemented in SOPS first, then in Flux; I don't think running one controller instance per tenant is a good solution.

@travisgroth

Before seeing this issue, I opened https://github.com/fluxcd/flux2/issues/1478. Not sure if that should move to this repo but it seems related to the ask here. Would something like that support your use case @moolen?

FWIW, I think currently you can run a kustomize-controller per namespace by setting --watch-all-namespaces=false for multi-tenant use cases. I'd prefer something with less operational overhead when you're primarily worried about segmenting secrets access.
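A sketch of the per-namespace controller setup mentioned above; the flag comes from the comment, everything else (names, namespace, image) is illustrative:

```yaml
# One kustomize-controller per tenant namespace, watching only its own
# namespace; each deployment's service account can then carry its own
# IRSA role annotation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kustomize-controller
  namespace: tenant-a                     # illustrative tenant namespace
spec:
  selector:
    matchLabels:
      app: kustomize-controller
  template:
    metadata:
      labels:
        app: kustomize-controller
    spec:
      serviceAccountName: kustomize-controller
      containers:
        - name: manager
          image: fluxcd/kustomize-controller  # illustrative, tag omitted
          args:
            - --watch-all-namespaces=false
```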

@dquagebeur
Contributor

We would like to use Azure Key Vault instead of PGP keys. What do you think about this proposal:
Integrate the Azure Key Vault SDK in the keyservice to encrypt/decrypt using Azure credentials stored in a secret from the current namespace (behaviour similar to the current PGP implementation).
Is this compatible with how SOPS works with AKV? (I don't know much about SOPS)

@dquagebeur
Contributor

Hi @stefanprodan, could you give us your feedback? Is it worth investing in this direction?

@dquagebeur
Contributor

I have made an implementation; could you give me your feedback and let me know whether it could be merged: commit
Rules:

  • if a secretRef contains a secret entry named *.json, the JSON is written to a tmp folder (like .asc for PGP)
  • if a secret is encrypted with an azureKeyvaultKey, I check whether an azure_kv.json exists; if yes, I use the Azure SDK and the JSON as credentials to decrypt; if not, I fall back to the default decrypt workflow
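As I read the rules above, the tenant-side setup would look roughly like this; secret names, namespace, and the credential fields are illustrative, not taken from the linked commit:

```yaml
# Illustrative: a per-namespace secret holding Azure credentials as
# azure_kv.json, referenced from the Kustomization's decryption secretRef
# alongside any PGP .asc keys.
apiVersion: v1
kind: Secret
metadata:
  name: sops-azure
  namespace: tenant-a          # illustrative
stringData:
  azure_kv.json: |
    {
      "clientId": "…",
      "clientSecret": "…",
      "tenantId": "…"
    }
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: tenant-a-apps          # illustrative
  namespace: tenant-a
spec:
  interval: 5m
  path: ./apps
  prune: true
  sourceRef:
    kind: GitRepository
    name: tenant-a-repo        # illustrative
  decryption:
    provider: sops
    secretRef:
      name: sops-azure         # the controller would write azure_kv.json to tmp
```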

@stefanprodan
Member

Hey @dquagebeur, I see some issues with your implementation, for example the use of os.Setenv, which affects all tenants; if we can get rid of it then this looks good to me. Please open a PR so others can comment.
