
[BUG] resource "ovh_dedicated_nasha_partition_snapshot" created at every apply #768

Closed

schirka opened this issue Nov 11, 2024 · 2 comments

schirka commented Nov 11, 2024

Describe the bug

The Terraform resource "ovh_dedicated_nasha_partition_snapshot" is correctly applied on the partition, but the provider creates it again at every apply.

Terraform Version

Terraform v1.5.7

OVH Terraform Provider Version

provider registry.terraform.io/ovh/ovh v0.33.0

Affected Resource(s)

ovh_dedicated_nasha_partition_snapshot

Terraform Configuration Files

terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
      version = "0.33.0"
    }
  }
}

locals {
  snapshot_loop = distinct(flatten([
    for partition in var.partitions : [
      for frequency in partition.snapshots : {
        type = frequency
        name = partition.name
      }
    ]
  ]))
}

resource "ovh_dedicated_nasha_partition" "partition" {
  for_each = var.partitions
  service_name = var.service_name
  name = each.value.name
  size = each.value.size
  protocol = each.value.protocol
}

resource "ovh_dedicated_nasha_partition_snapshot" "snapshot-policy" {
  for_each = { for snapshot in local.snapshot_loop : "${snapshot.name}-${snapshot.type}" => snapshot }
  service_name = var.service_name
  partition_name = each.value.name
  type = each.value.type
  depends_on = [
    ovh_dedicated_nasha_partition.partition
  ]
}
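
For reference (not part of the original report), the resources above appear to assume a "partitions" variable shaped roughly like the following; the entry in the default is purely illustrative:

# Hypothetical declaration of var.partitions, inferred from how it is used above;
# the names and values below are illustrative only.
variable "partitions" {
  type = map(object({
    name      = string
    size      = number
    protocol  = string
    snapshots = list(string)
  }))
  default = {
    data = {
      name      = "data"
      size      = 20
      protocol  = "NFS"
      snapshots = ["day-3"]
    }
  }
}

With an input like this, local.snapshot_loop yields one object per (partition, snapshot type) pair, and the for_each in "snapshot-policy" keys each entry as "<partition name>-<type>", e.g. "data-day-3".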

Expected Behavior

Snapshot configuration should be applied only once, or again when it changes.

Actual Behavior

Snapshot configuration is re-applied at every terraform apply.

Steps to Reproduce

  1. terraform apply
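
A quick way to confirm the behaviour (illustrative, not part of the original report) is to run a plan immediately after the first apply; with -detailed-exitcode, terraform plan exits 0 when the state already matches the configuration and 2 when it still has pending changes:

# First apply creates the partitions and snapshot policies.
terraform apply
# A second plan should be empty; exit code 2 means the provider
# still plans to create the snapshot resources again.
terraform plan -detailed-exitcode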

amstuta (Contributor) commented Nov 13, 2024

Hello @schirka,

Could you provide a small reproducer that doesn't include loops or variables, so we can test it on our side with the same configuration?

When using a simple configuration like the following, I see no re-creation issue:

resource "ovh_dedicated_nasha_partition" "partition" {
  service_name = "zpool-****"
  name = "test-partition"
  size = 20
  protocol = "NFS"
}

resource "ovh_dedicated_nasha_partition_snapshot" "partition-snap" {
  service_name = "zpool-*****"
  partition_name = ovh_dedicated_nasha_partition.partition.name
  type = "day-3"
}

amstuta (Contributor) commented Jan 6, 2025

Closing this issue because it is inactive; don't hesitate to reopen it with new information if needed.

amstuta closed this as completed Jan 6, 2025