aws_kms - stabilize and add integration tests #1052
Conversation
LGTM. There are a lot of formatting changes in the tests, though (which are good changes!), so I'd like to see a second review before a merge in case I missed anything.
Considering the last CI run failed, it looks like the wait will still cause CI problems. Seems like I should just remove the wait and keep the tests marked as …
LGTM!
recheck
plugins/modules/aws_kms.py
@@ -863,6 +904,10 @@ def update_key(connection, module, key):
     changed |= update_grants(connection, module, key, module.params.get('grants'), module.params.get('purge_grants'))
     changed |= update_key_rotation(connection, module, key, module.params.get('enable_key_rotation'))

+    # Pause to wait for updates
+    if changed and not module.check_mode:
+        sleep(5)
I'm not a fan of using sleep here because it probably isn't solving the issue. If the update operation sometimes takes more than 5 seconds, we will be in the same situation. Maybe @tremble can suggest a better approach instead.
I think what we'd need to do here is to build a comparison function that constructs what we expect to be returned by get_key_details(), and checks whether result matches (or at least the relevant keys). If it doesn't, retry (some limited number of times).
Or else change every update/ensure function to do something similar: call connection.update_key_description, then connection.describe_key and check the description. I don't like this option much, though.
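A rough sketch of what that comparison-and-retry helper could look like (assuming the module's existing get_key_details(connection, module, key_id) helper; the wait_for_key_state name, the expected argument, and the retry numbers are made up for illustration):

```python
from time import sleep


def wait_for_key_state(connection, module, key_id, expected, retries=5, delay=2):
    """Poll get_key_details() until the listed fields match what we just set.

    `expected` maps a subset of get_key_details() keys to the values we
    expect after the update, e.g. {'description': 'new-description'}.
    Returns the last result whether or not it converged.
    """
    result = {}
    for _ in range(retries):
        result = get_key_details(connection, module, key_id)
        # Only compare the fields we actually changed.
        if all(result.get(key) == value for key, value in expected.items()):
            break
        sleep(delay)
    return result
```

The catch, per the rest of this thread, is that a read that matches once can still flip back to the old values on a later call, so converging once isn't a reliable signal.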
> I think what we'd need to do here is to build a comparison function that constructs what we expect to be returned by get_key_details(), and checks whether result matches (or at least the relevant keys). If it doesn't, retry (some limited number of times).

sleep is used like this in s3_lifecycle
Another option is retrying on KMSInvalidStateException, like what I proposed for iam_user and what rds_instance does.
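For reference, that would look roughly like this with AWSRetry's catch_extra_error_codes (a sketch; the retry count is illustrative and describe_key_with_backoff is a hypothetical wrapper):

```python
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry


# Treat KMSInvalidStateException as retryable while the key settles
# after an update, on top of the usual throttling error codes.
@AWSRetry.jittered_backoff(retries=10, catch_extra_error_codes=['KMSInvalidStateException'])
def describe_key_with_backoff(connection, key_id):
    return connection.describe_key(KeyId=key_id)
```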
@tremble - tagging you here because this is similar to the igw issue: on one call of getting the key, the description is new-description, then on the following call the description is old-description. Same with tags, policies, etc.
@jillr I tried both of those options and still got errors in the integration tests. It'll return the updated fields, then on subsequent calls return the old fields.
@jatorcasso Gotcha. In that case the sleep sounds justified; can you please update the code comment to briefly explain that the API stays inconsistent for too long to do a simple check, and that we need to wait to be confident the change has propagated? Otherwise we might try to change it later and run into this problem again.
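Something along these lines, perhaps (the wording is just a suggestion):

```python
# Wait for the update to propagate. The KMS read APIs are eventually
# consistent and can keep flipping between old and new values for a
# while after an update, so a simple read-back check isn't reliable;
# a fixed pause is what has kept the idempotency tests stable.
if changed and not module.check_mode:
    sleep(5)
```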
To summarize the OT (Slack) conversation around this: it seems that all of the proposed workarounds for the wait period after updates, and for the inconsistencies in what's reflected when fetching key metadata, fail to fix the problem (integration tests still failing on the idempotency tests). Documenting this in the module, as well as adding waits in the integration tests, was decided to be the best solution (albeit not a great one).
docs update
thanks for all your work on this @jatorcasso!
Backport to stable-3: 💚 backport PR created ✅
Backported as #1099 🤖 @patchback
aws_kms - stabilize and add integration tests

SUMMARY
- update/add integration tests for various actions
- return list of policies as a list of JSONs for clarity
- sleep on updates (no KMS waiter; attempted manual waiters but still had test failures)

ISSUE TYPE
- Feature Pull Request

COMPONENT NAME
aws_kms

ADDITIONAL INFORMATION
I tried adding manual waiters for different actions, like waiting for tags to be correct or the policy to be updated, but they would still fail roughly half of the time on the idempotency tests. It seems the key's status is a bit buggy right after an update.

Reviewed-by: Jill R <None>
Reviewed-by: Mark Chappell <None>
Reviewed-by: Joseph Torcasso <None>
Reviewed-by: Mandar Kulkarni <[email protected]>
Reviewed-by: Markus Bergholz <[email protected]>
Reviewed-by: Alina Buzachis <None>
(cherry picked from commit 558a025)
[PR #1052/558a0252 backport][stable-3] aws_kms - stabilize and add integration tests

This is a backport of PR #1052 as merged into main (558a025).

SUMMARY
- update/add integration tests for various actions
- return list of policies as a list of JSONs for clarity
- sleep on updates (no KMS waiter; attempted manual waiters but still had test failures)

ISSUE TYPE
- Feature Pull Request

COMPONENT NAME
aws_kms

ADDITIONAL INFORMATION
I tried adding manual waiters for different actions, like waiting for tags to be correct or the policy to be updated, but they would still fail roughly half of the time on the idempotency tests. It seems the key's status is a bit buggy right after an update.

Reviewed-by: Alina Buzachis <None>