
Secondary Elasticache Cluster will get recreated after each apply #18075

Closed
OmarSalka opened this issue Mar 12, 2021 · 5 comments · Fixed by #18361
Labels
bug Addresses a defect in current functionality. service/elasticache Issues and PRs that pertain to the elasticache service.

Comments

@OmarSalka

OmarSalka commented Mar 12, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform v0.14.6, provider v3.32.0

Affected Resource(s)

aws_elasticache_replication_group

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

# Global
resource "aws_elasticache_global_replication_group" "global" {
  count                              = var.ELASTICACHE ? 1 : 0
  provider                           = aws.primary
  global_replication_group_id_suffix = "XXXXXXXX"
  primary_replication_group_id       = aws_elasticache_replication_group.primary[0].id
}

# Primary
resource "aws_elasticache_replication_group" "primary" {
  count                         = var.ELASTICACHE ? 1 : 0
  provider                      = aws.primary
  replication_group_id          = "XXXXXXXX"
  replication_group_description = "XXXXXXXX"
  engine                        = "redis"
  engine_version                = "6.0.5"#"6.x"
  node_type                     = "cache.m5.large"
  number_cache_clusters         = var.ENVIRONMENT_TYPE == "nonprod" ? 1 : 3
#   parameter_group_name          = "XXXXXX"
  port                          = 6379
  subnet_group_name             = local.primary_elasticache_rg_subnet_group
  security_group_ids            = [XXXXXX, XXXXXX]
  multi_az_enabled              = var.ENVIRONMENT_TYPE == "nonprod" ? false : true
  automatic_failover_enabled    = var.ENVIRONMENT_TYPE == "nonprod" ? false : true
  apply_immediately             = var.ENVIRONMENT_TYPE == "nonprod" ? true : false
  at_rest_encryption_enabled    = true
  transit_encryption_enabled    = true
  auth_token                    = XXXXXX
  snapshot_retention_limit      = X
  snapshot_window               = "XXXXXX"
  final_snapshot_identifier     = "XXXXXXX"
  maintenance_window            = "XXXXXXXX"
}

#Secondary
resource "aws_elasticache_replication_group" "secondary_rg_1" {
  count                         = var.CREATE_SECONDARY_REGION_1 && var.ELASTICACHE ? 1 : 0
  provider                      = aws.secondary_1
  replication_group_id          = "XXXXXX"
  replication_group_description = "XXXXXX"
  global_replication_group_id   = aws_elasticache_global_replication_group.global[0].global_replication_group_id
  number_cache_clusters         = var.ENVIRONMENT_TYPE == "nonprod" ? 1 : 3
#   parameter_group_name          = "XXXXXX"
  port                          = 6379
  subnet_group_name             = local.secondary_elasticache_rg_1_subnet_group
  security_group_ids            = [XXXXX, XXXXX]
  multi_az_enabled              = var.ENVIRONMENT_TYPE == "nonprod" ? false : true
  apply_immediately             = var.ENVIRONMENT_TYPE == "nonprod" ? true : false
  auth_token                    = XXXXXXX
  snapshot_retention_limit      = X
  snapshot_window               = "XXXXX"
  final_snapshot_identifier     = "XXXXXX"
  maintenance_window            = "XXXXXXX"
}

Terraform Plan

# aws_elasticache_replication_group.secondary_rg_1[0] must be replaced
-/+ resource "aws_elasticache_replication_group" "secondary_rg_1" {
      ~ arn                            = "xxxxxxxxxxxx" -> (known after apply)
      ~ at_rest_encryption_enabled     = true -> false # forces replacement
      ~ cluster_enabled                = false -> (known after apply)
      + configuration_endpoint_address = (known after apply)
      ~ engine_version                 = "6.0.5" -> (known after apply)
      ~ id                             = "xxxxxxxxxxx" -> (known after apply)
      ~ member_clusters                = [
          - "xxxxxxxx",
        ] -> (known after apply)
      ~ node_type                      = "cache.m5.large" -> (known after apply)
      ~ parameter_group_name           = "xxxxxxxx" -> (known after apply)
      ~ primary_endpoint_address       = "xxxxxxxxx" -> (known after apply)
      ~ reader_endpoint_address        = "xxxxxxxxx" -> (known after apply)
      ~ security_group_names           = [] -> (known after apply)
      - tags                           = {} -> null
      ~ transit_encryption_enabled     = true -> false # forces replacement
        # (17 unchanged attributes hidden)

      ~ cluster_mode {
          ~ num_node_groups         = 1 -> (known after apply)
          ~ replicas_per_node_group = 0 -> (known after apply)
        }
    }

Expected Behavior

The secondary replication group should not be recreated when nothing in the configuration has changed.

Actual Behavior

Terraform forces a recreation of the secondary ElastiCache replication group on every apply.

Steps to Reproduce

  1. terraform apply

When I provided the encryption attributes the first time I created this secondary cluster, Terraform complained that they conflict with the global_replication_group_id attribute. But the plan doesn't even show that attribute; it's as if the secondary is being treated as a standalone replication group.
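
For context, a sketch of the kind of configuration that triggers that validation error, based on the secondary resource above (attribute values here are hypothetical, not from the original report):

# Hypothetical illustration: setting the encryption attributes together with
# global_replication_group_id is rejected by the provider's ConflictsWith
# validation (see the schema reference quoted later in this thread).
resource "aws_elasticache_replication_group" "secondary_rg_1" {
  replication_group_id          = "example-secondary"
  replication_group_description = "example"
  global_replication_group_id   = aws_elasticache_global_replication_group.global[0].global_replication_group_id
  transit_encryption_enabled    = true # conflicts with global_replication_group_id
  at_rest_encryption_enabled    = true # conflicts with global_replication_group_id
  # ... remaining arguments as in the configuration above ...
}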

@ghost ghost added the service/elasticache Issues and PRs that pertain to the elasticache service. label Mar 12, 2021
@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Mar 12, 2021
@anGie44 anGie44 added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Mar 12, 2021
@anGie44
Contributor

anGie44 commented Mar 12, 2021

Hi @OmarSalka, thank you for raising this issue, and apologies that you came across this apply-time error. Making an initial pass, it seems the transit_encryption_enabled argument is Computed for the secondary replication group, so we'll need to account for that in the schema if possible, perhaps by removing the ConflictsWith condition to allow users to remedy the diff in their Terraform configuration, and/or by updating how we read back the replication group resource. More investigation to come. As a workaround for the diff you are seeing, did you try something like:

lifecycle {
  ignore_changes = [
    transit_encryption_enabled,
  ]
}

?

Schema reference:

"global_replication_group_id": {
Type: schema.TypeString,
Optional: true,
ForceNew: true,
Computed: true,
ConflictsWith: []string{
"automatic_failover_enabled",
"cluster_mode", // should/will be "num_node_groups"
"parameter_group_name",
"engine",
"engine_version",
"node_type",
"security_group_names",
"transit_encryption_enabled",
"at_rest_encryption_enabled",
"snapshot_arns",
"snapshot_name",
},
},

Relates: #17725

@mrobinsn

@OmarSalka I just ran into this same issue and had to add all of these things into the lifecycle block on the replica to get a clean re-apply:

lifecycle {
  ignore_changes = [engine_version, at_rest_encryption_enabled, automatic_failover_enabled, transit_encryption_enabled]
}
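
Applied to the secondary resource from the original configuration, a minimal sketch of where that block goes (attribute values redacted as in the report):

resource "aws_elasticache_replication_group" "secondary_rg_1" {
  # ... arguments as in the configuration above ...

  # Workaround sketch: ignore the attributes the provider reads back as
  # Computed on a secondary (global datastore) member so they no longer
  # force replacement on every apply.
  lifecycle {
    ignore_changes = [
      engine_version,
      at_rest_encryption_enabled,
      automatic_failover_enabled,
      transit_encryption_enabled,
    ]
  }
}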

@OmarSalka
Author

Sorry for the late reply @anGie44 @mrobinsn, yes, that works for now. Appreciate your help!

@ghost

ghost commented Mar 26, 2021

This has been released in version 3.34.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

@ghost

ghost commented Apr 24, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Apr 24, 2021