r/aws_elasticache_cluster: Add support for in-transit encryption #26987
Conversation
What's stopping the merge?
It would be really good to understand when this is likely to be merged in.
Is there anything I can do to help get this in? It looks like it passed all checks, but has conflicts. I'd be happy to help resolve them in order to get this in sooner rather than later. (It's an issue for certain compliance baselines that require TLS everywhere, and enabling TLS can only be done for Memcached at cluster creation...)
Thanks for doing this @tmccombs. This is a huge help; let me know if there's any way I can help get it merged.
All the checks are passing, so it's able to be merged. Is there any way at all to get this merged in today? This is an absolute blocker for anyone using Terraform and ElastiCache Memcached in any remotely compliant environment that requires TLS in transit. Happy to help in any way to get this through the finish line.
+1
What is preventing this pull request from being merged?
@ewbankkit @justinretzolk You appear to be active contributors to this repo -- any chance we could get your attention on merging this PR? 🙏🏻 Thank you for your consideration. 🙇🏻♂️
@tmccombs @prashanthaitha24 Is this ticket being held back by the second golangci-lint check that was cancelled after 360m? Does that check just need to be re-run, assuming the timeout was a one-off failure? I can't think of any other reason this ticket has lingered with so many thumbs-ups -- 83 is more than I have seen for enhancements merged into recent releases.
I don't know any more than you about what needs to happen for this to be merged. |
LGTM 🚀
% make testacc PKG=elasticache TESTS=TestAccElastiCacheCluster_ ACCTEST_PARALLELISM=5
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./internal/service/elasticache/... -v -count 1 -parallel 5 -run='TestAccElastiCacheCluster_' -timeout 180m
--- PASS: TestAccElastiCacheCluster_Engine_memcached (622.69s)
=== CONT TestAccElastiCacheCluster_outpost_memcached
acctest.go:1102: skipping since no Outposts found
--- SKIP: TestAccElastiCacheCluster_outpost_memcached (0.77s)
=== CONT TestAccElastiCacheCluster_outpostID_redis
acctest.go:1102: skipping since no Outposts found
--- SKIP: TestAccElastiCacheCluster_outpostID_redis (0.18s)
=== CONT TestAccElastiCacheCluster_outpostID_memcached
acctest.go:1102: skipping since no Outposts found
--- SKIP: TestAccElastiCacheCluster_outpostID_memcached (0.17s)
=== CONT TestAccElastiCacheCluster_outpost_redis
acctest.go:1102: skipping since no Outposts found
--- SKIP: TestAccElastiCacheCluster_outpost_redis (0.19s)
=== CONT TestAccElastiCacheCluster_ParameterGroupName_default
--- PASS: TestAccElastiCacheCluster_NumCacheNodes_decrease (953.68s)
=== CONT TestAccElastiCacheCluster_snapshotsWithUpdates
--- PASS: TestAccElastiCacheCluster_Engine_Redis_LogDeliveryConfigurations (1130.35s)
=== CONT TestAccElastiCacheCluster_port
--- PASS: TestAccElastiCacheCluster_ParameterGroupName_default (639.97s)
=== CONT TestAccElastiCacheCluster_ipDiscovery
--- PASS: TestAccElastiCacheCluster_NodeTypeResize_memcached (1274.75s)
=== CONT TestAccElastiCacheCluster_ReplicationGroupID_singleReplica
--- PASS: TestAccElastiCacheCluster_ReplicationGroupID_multipleReplica (1470.47s)
=== CONT TestAccElastiCacheCluster_ReplicationGroupID_availabilityZone
--- PASS: TestAccElastiCacheCluster_snapshotsWithUpdates (701.67s)
=== CONT TestAccElastiCacheCluster_AZMode_memcached
--- PASS: TestAccElastiCacheCluster_port (577.87s)
=== CONT TestAccElastiCacheCluster_NumCacheNodes_redis
--- PASS: TestAccElastiCacheCluster_NumCacheNodes_redis (1.80s)
=== CONT TestAccElastiCacheCluster_NodeTypeResize_redis
--- PASS: TestAccElastiCacheCluster_ipDiscovery (680.17s)
=== CONT TestAccElastiCacheCluster_EngineVersion_redis
--- PASS: TestAccElastiCacheCluster_AZMode_memcached (630.63s)
=== CONT TestAccElastiCacheCluster_Engine_None
--- PASS: TestAccElastiCacheCluster_Engine_None (1.00s)
=== CONT TestAccElastiCacheCluster_PortRedis_default
--- PASS: TestAccElastiCacheCluster_ReplicationGroupID_singleReplica (1498.63s)
=== CONT TestAccElastiCacheCluster_EngineVersion_memcached
--- PASS: TestAccElastiCacheCluster_PortRedis_default (619.20s)
=== CONT TestAccElastiCacheCluster_tagWithOtherModification
--- PASS: TestAccElastiCacheCluster_ReplicationGroupID_availabilityZone (1480.43s)
=== CONT TestAccElastiCacheCluster_AZMode_redis
--- PASS: TestAccElastiCacheCluster_NodeTypeResize_redis (1459.23s)
=== CONT TestAccElastiCacheCluster_TransitEncryption
--- PASS: TestAccElastiCacheCluster_AZMode_redis (671.30s)
=== CONT TestAccElastiCacheCluster_Engine_redis_v5
--- PASS: TestAccElastiCacheCluster_tagWithOtherModification (743.37s)
=== CONT TestAccElastiCacheCluster_Engine_redis
--- PASS: TestAccElastiCacheCluster_TransitEncryption (671.85s)
=== CONT TestAccElastiCacheCluster_tags
--- PASS: TestAccElastiCacheCluster_EngineVersion_memcached (1367.36s)
=== CONT TestAccElastiCacheCluster_Redis_finalSnapshot
--- PASS: TestAccElastiCacheCluster_Engine_redis (650.56s)
=== CONT TestAccElastiCacheCluster_Redis_autoMinorVersionUpgrade
--- PASS: TestAccElastiCacheCluster_Engine_redis_v5 (681.70s)
=== CONT TestAccElastiCacheCluster_vpc
--- PASS: TestAccElastiCacheCluster_tags (634.17s)
=== CONT TestAccElastiCacheCluster_multiAZInVPC
--- PASS: TestAccElastiCacheCluster_Redis_autoMinorVersionUpgrade (674.28s)
=== CONT TestAccElastiCacheCluster_Memcached_finalSnapshot
--- PASS: TestAccElastiCacheCluster_Memcached_finalSnapshot (1.85s)
=== CONT TestAccElastiCacheCluster_NumCacheNodes_increaseWithPreferredAvailabilityZones
--- PASS: TestAccElastiCacheCluster_Redis_finalSnapshot (846.20s)
=== CONT TestAccElastiCacheCluster_NumCacheNodes_increase
--- PASS: TestAccElastiCacheCluster_EngineVersion_redis (3046.54s)
--- PASS: TestAccElastiCacheCluster_vpc (688.39s)
--- PASS: TestAccElastiCacheCluster_multiAZInVPC (865.81s)
--- PASS: TestAccElastiCacheCluster_NumCacheNodes_increase (1015.34s)
--- PASS: TestAccElastiCacheCluster_NumCacheNodes_increaseWithPreferredAvailabilityZones (1069.62s)
PASS
ok github.com/hashicorp/terraform-provider-aws/internal/service/elasticache 6049.030s
Thanks for your contribution, @tmccombs! 👏 Also, appreciate your patience in getting this reviewed.
This functionality has been released in v5.16.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Fixes: #26367
Description
Adds a transit_encryption_enabled attribute to the aws_elasticache_cluster resource, to allow creating Memcached clusters with transit encryption enabled.
Relations
Closes #26367
Closes #29403
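For readers arriving from the release notes, a minimal sketch of how the new attribute is used (the resource name, node type, and sizing below are illustrative placeholders, not taken from this PR; as noted in the discussion, in-transit encryption for Memcached can only be enabled at cluster creation):

```hcl
# Illustrative example: a Memcached cluster created with in-transit
# encryption enabled. All identifiers and sizes are placeholders.
resource "aws_elasticache_cluster" "example" {
  cluster_id                 = "example-memcached"
  engine                     = "memcached"
  node_type                  = "cache.t3.micro" # placeholder instance class
  num_cache_nodes            = 2
  transit_encryption_enabled = true # the attribute added by this PR
}
```

Because the setting can only be applied at creation time, toggling `transit_encryption_enabled` on an existing Memcached cluster would be expected to force replacement of the cluster.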