Vault no longer respects AWS_ROLE_ARN or AWS_WEB_IDENTITY_TOKEN_FILE for AWS KMS #21478
Comments
We are running into a similar issue, but unfortunately the workaround is not working for us. I still get:
…
This is what our seal configuration looks like:
…
@ubajze I apologize, I was in a bit of a rush to solve this issue on our own side, and I missed that apparently the …
@dqsully it is working after adding the …
@hsimon-hashicorp I don't know the conventions here, but would this be better labeled under …
We have the same issue after upgrading to 1.14.0. This config no longer works:

```hcl
seal "awskms" {
  region     = "${data.aws_region.current.name}"
  kms_key_id = "${module.aws_kms_key.key_id}"
}
```
This also seems to be relevant for the S3 storage backend, which doesn't support the …
Same issue here. We're in the middle of trying to get IRSA incorporated into our environments, and we upgraded our staging instance to 1.14.0 to account for this CVE. This looks like it may be related to the issue about the same thing with the …
We have the same situation here.
This also breaks the AWS secret backend.
Hi folks, thanks for the issue report, the repro steps, and the comments about this problem! Our engineering teams are working on a fix to be included in an upcoming release. Thanks for your patience in the meantime!
Resolved by #21951. The fix will be available in Vault v1.14.1.
Hi, I can still reproduce the issue. I used the latest Helm chart v0.25.0 with Vault v1.14.0 and v1.14.1.
I got it working with Vault v1.14.1 |
Describe the bug
AWS KMS seals no longer respect the `AWS_ROLE_ARN` or `AWS_WEB_IDENTITY_TOKEN_FILE` environment variables, which are required for assuming IAM roles via Kubernetes ServiceAccount tokens. Instead, Vault attempts to use the EC2 instance's IAM role (if available) to access the KMS key, rather than the Kubernetes ServiceAccount.

To Reproduce
Steps to reproduce the behavior: configure Vault with an `awskms` seal, setting only `kms_key_id`, and add a ServiceAccount annotation `eks.amazonaws.com/role-arn: <IAM role ARN>`.
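For illustration, the annotated ServiceAccount from the reproduction step might look like the following sketch (the name, namespace, and role ARN are hypothetical placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault          # hypothetical ServiceAccount name
  namespace: vault     # hypothetical namespace
  annotations:
    # Because of this annotation, EKS injects AWS_ROLE_ARN and
    # AWS_WEB_IDENTITY_TOKEN_FILE into pods using this ServiceAccount.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/vault-kms-unseal
```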
Expected behavior
Vault should assume the IAM role configured in the Kubernetes ServiceAccount annotation and referenced by `AWS_ROLE_ARN` (injected by EKS because of the annotation), using the Kubernetes ServiceAccount token file referenced by `AWS_WEB_IDENTITY_TOKEN_FILE` (also injected by EKS) for authentication with AWS.

Environment:
Vault server configuration file(s):
Additional context
There is an easy workaround for this bug, which is to set `role_arn` and `web_identity_token_file` in the seal settings.
Also, as far as I could trace it, the issue seems to come from this list of approved(?) environment variables for the AWS KMS wrapper: 254d8f8#diff-8669cb5f3518deb7d1841c405e7e8b222348751cf85f81e6077a1184e9ed767dR15-R23
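A sketch of the workaround described above, using the `role_arn` and `web_identity_token_file` seal parameters (the region, key alias, and role ARN here are hypothetical placeholders; the token path is the one EKS conventionally injects):

```hcl
seal "awskms" {
  region                  = "us-east-1"                                        # hypothetical region
  kms_key_id              = "alias/vault-unseal"                               # hypothetical key
  role_arn                = "arn:aws:iam::123456789012:role/vault-kms-unseal"  # hypothetical role
  web_identity_token_file = "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
}
```

Setting these explicitly bypasses the environment-variable handling that broke in 1.14.0.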
Hopefully the fix is as easy as adding `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` to that list.
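The suspected cause, an allow-list of environment variables consulted by the KMS wrapper, can be illustrated with a small sketch. This is a hypothetical illustration of the filtering behavior, not Vault's actual code; only the two IRSA variable names come from this issue.

```python
# Hypothetical illustration of an environment-variable allow-list:
# credentials-related variables not on the list never reach the AWS
# client, so web-identity role assumption is silently skipped.
APPROVED_ENV_VARS = {
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_SESSION_TOKEN",
    "AWS_REGION",
}

def filter_env(env: dict) -> dict:
    """Keep only approved variables, dropping everything else."""
    return {k: v for k, v in env.items() if k in APPROVED_ENV_VARS}

pod_env = {
    "AWS_REGION": "us-east-1",
    "AWS_ROLE_ARN": "arn:aws:iam::123456789012:role/vault-kms-unseal",
    "AWS_WEB_IDENTITY_TOKEN_FILE": "/var/run/secrets/eks.amazonaws.com/serviceaccount/token",
}

# The IRSA variables are filtered out, so role assumption never happens.
print(sorted(filter_env(pod_env)))  # ['AWS_REGION']
```

Under this model, adding `AWS_ROLE_ARN` and `AWS_WEB_IDENTITY_TOKEN_FILE` to the approved set, as suggested above, would let the variables through to the AWS client again.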