KMS resource Key Policy propagation consistent fail #21225
Comments
Potentially caused by terraform-provider-aws/aws/internal/service/kms/waiter/waiter.go, lines 96 to 122 at 040f37e.
I'm facing this issue but can't downgrade to 3.52 because I need to use autoscaling_group_tag, which was only added in v3.56.0. I attempted to debug the issue, and in my case it was related to the following in my policy (note the boolean represented as an actual bool):

```json
{
  "Condition": {
    "Bool": {
      "kms:GrantIsForAWSResource": true
    }
  }
}
```

While debugging, I changed my condition to the following:

```json
{
  "Condition": {
    "Bool": {
      "kms:GrantIsForAWSResource": "true"
    }
  }
}
```

and it seems to work now. Terraform was also continuously showing a diff before this change, but I assumed that was unrelated. This might not be the cause in @AshMenhennett's case, but it may be of help to others.
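For reference, a minimal sketch of how that quoted condition value can be expressed when the key policy is built with Terraform's jsonencode; the resource name, ARN, and statement here are placeholders rather than anything from the original report:

```hcl
resource "aws_kms_key" "example" {
  description = "CMK with a custom policy (hypothetical example)"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowServiceGrants"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:root" }
        Action    = ["kms:CreateGrant", "kms:ListGrants", "kms:RevokeGrant"]
        Resource  = "*"
        Condition = {
          Bool = {
            # Quoted string, not a bare boolean, so the submitted JSON matches
            # the normalized form that KMS stores.
            "kms:GrantIsForAWSResource" = "true"
          }
        }
      }
    ]
  })
}
```

Quoting the value matches the string form KMS stores, which is what the commenter found avoided both the perpetual diff and the propagation timeout.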
Hey @CrawX, that's awesome, finding a workaround AND keeping the new features. I also appreciate that you took the time to debug it, as that's helpful for us as well! This thread looks like it's related; when I find some time I'll see if we can remedy it as you have.
Thanks to @CrawX's pointer, we also fixed our issue: we now use a key policy that avoids updates-in-place related to the principals block. So it looks like the validation between user_id and arn is either not working or only happens after the 5m0s timeout.
As @CrawX mentioned, study the plan differences on your key policy closely and try to remove them. We had defined our KMS key policy with two AWS principal fields instead of a single AWS field holding an array.
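One way to get that single-array shape without hand-writing the JSON is the aws_iam_policy_document data source, which expresses principals as one typed list; the sid and ARNs below are placeholders, not the commenter's actual policy:

```hcl
# Renders a key policy whose Principal element is a single "AWS" key with an
# array of ARNs, instead of the same field repeated in hand-written JSON.
data "aws_iam_policy_document" "key_policy" {
  statement {
    sid    = "AllowKeyAdministration"
    effect = "Allow"

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::123456789012:role/key-admin",   # placeholder ARN
        "arn:aws:iam::123456789012:role/ci-deployer", # placeholder ARN
      ]
    }

    actions   = ["kms:*"]
    resources = ["*"]
  }
}
```

The rendered document can then be attached with policy = data.aws_iam_policy_document.key_policy.json.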
The failure to converge with a …
This functionality has been released in v3.69.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
Still occurring for me.
Same.
Any update @ewbankkit?
Error:
Same for me too on v3.74.3.
Still occurs for me at v3.75.1.
/reopen
This is also occurring for me on hashicorp/aws v4.1.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Terraform CLI and Terraform AWS Provider Version
Terraform Core: 1.0.5
AWS Provider: 3.62.0
Affected Resource(s)
aws_kms_key
Terraform Configuration Files
Same as per #20588, which isn't resolved in the latest provider.
The workaround is to roll back to 3.52, which allows creating a key with a policy that has multiple statements.
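If you need that rollback, a pin along these lines (assuming the hashicorp/aws source address) keeps the provider on 3.52 until you can upgrade past the fix:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Stay on the last release known to accept multi-statement key policies.
      version = "3.52.0"
    }
  }
}
```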
Note: providing a policy that is equivalent to the default (non-lock-out) policy creates successfully; once an additional statement is added, provisioning fails.
There are new comments on #20588 that provide additional detail.
Thanks for helping resolve this.
Expected Behavior
I should be able to create a CMK with a custom policy.
Actual Behavior
Provisioning fails with an error that the key policy failed to propagate.
Steps to Reproduce
Create an aws_kms_key resource with the policy attribute set to a policy containing multiple statements, as in the sketch below.
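A minimal sketch of such a configuration; the account ID, role ARN, and statement names are placeholders, not the reporter's actual files:

```hcl
data "aws_caller_identity" "current" {}

resource "aws_kms_key" "repro" {
  description = "CMK with a multi-statement key policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Default-style statement keeping the account root as administrator,
        # so the key is not locked out.
        Sid       = "EnableRootAccess"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # Any statement beyond the default is enough to hit the propagation
        # timeout described in this issue on affected provider versions.
        Sid       = "AllowUseOfTheKey"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::123456789012:role/example-app" }
        Action    = ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"]
        Resource  = "*"
      }
    ]
  })
}
```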
References
#20588