
Adding an additional geo_location to an azurerm_cosmosdb_account should not require replacement #3532

Closed
lukehoban opened this issue May 27, 2019 · 7 comments · Fixed by #7217


@lukehoban

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

$ terraform -v
Terraform v0.12.0
+ provider.azurerm v1.28.0
+ provider.random v2.1.2

Affected Resource(s)

  • azurerm_cosmosdb_account

Terraform Configuration Files

resource "azurerm_resource_group" "rg" {
  name     = "rg123"
  location = "WestUS"
}

resource "random_integer" "ri" {
  min = 10000
  max = 99999
}

resource "azurerm_cosmosdb_account" "db" {
  name                = "tfex-cosmos-db-${random_integer.ri.result}"
  kind                = "MongoDB"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  consistency_policy {
    consistency_level       = "BoundedStaleness"
    max_interval_in_seconds = 10
    max_staleness_prefix    = 200
  }
  offer_type          = "Standard"
  enable_automatic_failover = true
  geo_location {
    location          = "WestUS"
    failover_priority = 0
  }
#  geo_location {
#    location          = "EastUS"
#    failover_priority = 1
#  }
}

Expected Behavior

After deploying the code above, uncommenting the second geo_location block and rerunning apply should update the existing CosmosDB account in place.

Actual Behavior

Instead, Terraform plans a full replacement of the account:

  # azurerm_cosmosdb_account.db must be replaced
-/+ resource "azurerm_cosmosdb_account" "db" {
      ~ connection_strings                = (sensitive value)
        enable_automatic_failover         = true
        enable_multiple_write_locations   = false
      ~ endpoint                          = "https://tfex-cosmos-db-95504.documents.azure.com:443/" -> (known after apply)
      ~ id                                = "/subscriptions/8bafcca2-660a-4459-a503-b785cf317a3a/resourceGroups/rg123/providers/Microsoft.DocumentDB/databaseAccounts/tfex-cosmos-db-95504" -> (known after apply)
        is_virtual_network_filter_enabled = false
        kind                              = "MongoDB"
        location                          = "westus"
        name                              = "tfex-cosmos-db-95504"
        offer_type                        = "Standard"
      ~ primary_master_key                = (sensitive value)
      ~ primary_readonly_master_key       = (sensitive value)
      ~ read_endpoints                    = [
          - "https://tfex-cosmos-db-95504-westus.documents.azure.com:443/",
        ] -> (known after apply)
        resource_group_name               = "rg123"
      ~ secondary_master_key              = (sensitive value)
      ~ secondary_readonly_master_key     = (sensitive value)
      ~ tags                              = {} -> (known after apply)
      ~ write_endpoints                   = [
          - "https://tfex-cosmos-db-95504-westus.documents.azure.com:443/",
        ] -> (known after apply)

        consistency_policy {
            consistency_level       = "BoundedStaleness"
            max_interval_in_seconds = 10
            max_staleness_prefix    = 200
        }

      - geo_location { # forces replacement
          - failover_priority = 0 -> null
          - id                = "tfex-cosmos-db-95504-westus" -> null
          - location          = "westus" -> null
        }
      + geo_location { # forces replacement
          + failover_priority = 0
          + id                = (known after apply)
          + location          = "westus"
        }
      + geo_location { # forces replacement
          + failover_priority = 1
          + id                = (known after apply)
          + location          = "eastus"
        }
    }

Steps to Reproduce

  1. terraform apply
  2. Uncomment the second geo_location
  3. terraform apply again

Notes

  • Additional geo locations can be added to a CosmosDB account without downtime or replacement using the Azure API or portal.
@D4zedC0der

I have a similar scenario, except that I haven't added a new geo_location; we already had two. Our plan output looks more like this:

      - geo_location { # forces replacement
          - failover_priority = 0 -> null
          - id                = "aca-pre-neu-csvi-geoloc2-1323" -> null
          - location          = "westeurope" -> null
          - prefix            = "aca-pre-neu-csvi-geoloc2-1323" -> null
        }
      + geo_location { # forces replacement
          + failover_priority = 0
          + id                = (known after apply)
          + location          = "westeurope"
          + prefix            = "aca-pre-neu-csvi-geoloc2-1323"
        }
      - geo_location { # forces replacement
          - failover_priority = 1 -> null
          - id                = "aca-pre-neu-csvi-1323-northeurope" -> null
          - location          = "northeurope" -> null
        }
      + geo_location { # forces replacement
          + failover_priority = 1
          + id                = (known after apply)
          + location          = "northeurope"
          + prefix            = "aca-pre-neu-csvi-1323-northeurope"
        }

The first geo_location stanza looks identical to us, so I don't understand why it needs to -/+ at all.

The second one didn't have a "prefix" previously, but if I add a prefix that matches the existing one, it should be the same, right? Yet it still tries to -/+. We use modules across multiple environments, so we are struggling to set a prefix on the first stanza but not the second without destroying the CosmosDB account in at least ONE of our environments.

Ultimately my question is: would any of the referenced PRs resolve this?

If not, and if this is indeed caused by adding a prefix to the second geo_location stanza, is there any way to set the prefix on the second geo_location stanza without causing a -/+?
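For illustration only (not suggested in this thread): if the diff is purely cosmetic, one way to suppress it is a `lifecycle` `ignore_changes` rule, a sketch with an obvious trade-off:

```hcl
resource "azurerm_cosmosdb_account" "db" {
  # ... existing account configuration unchanged ...

  # Caution: this stops Terraform from reconciling ANY geo_location
  # changes, including intentional ones made later in the config.
  lifecycle {
    ignore_changes = [geo_location]
  }
}
```

This only hides the plan noise; it does not fix the underlying ForceNew behavior in the provider.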

@eliasvakkuri

I have a similar issue. In my case I tried using the dynamic block functionality to allow a variable number of geo_location blocks (e.g. per environment), but this appears to force a full replacement of the CosmosDB account on every apply, so it is not currently usable.
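For reference, a minimal sketch of the dynamic-block approach described above (the `geo_locations` variable name and its defaults are illustrative):

```hcl
variable "geo_locations" {
  type = list(object({
    location          = string
    failover_priority = number
  }))
  default = [
    { location = "westus", failover_priority = 0 },
    { location = "eastus", failover_priority = 1 },
  ]
}

resource "azurerm_cosmosdb_account" "db" {
  # ... name, kind, offer_type, consistency_policy, etc. as above ...

  # One geo_location block is generated per entry in var.geo_locations.
  dynamic "geo_location" {
    for_each = var.geo_locations
    content {
      location          = geo_location.value.location
      failover_priority = geo_location.value.failover_priority
    }
  }
}
```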

@georgegil

Having the same issue here.

Changing geo settings replaces the entire db

@matthawley

Has anyone looked into this? This becomes a highly destructive action.

@jonhoare

I am also hitting this issue today. I currently have a single geo_location with priority 0. I am trying to add a second geo_location region without changing the already-defined primary location, yet Terraform wants to completely destroy and recreate the primary location while adding the secondary one.

@ghost

ghost commented Jun 11, 2020

This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.14.0"
}
# ... other configuration ...

@ghost

ghost commented Jul 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Jul 10, 2020