
kubernetes_horizontal_pod_autoscaler: scaling based on cpu and memory produces updates each run #28564

Closed
hydrapolic opened this issue Apr 30, 2021 · 4 comments
Labels: bug, new (new issue not yet triaged), waiting for reproduction (unable to reproduce issue without further information)

@hydrapolic

Terraform Version

0.14.10 and 0.15.1

Terraform Configuration Files

...
resource "kubernetes_horizontal_pod_autoscaler" "hpa" {
  metadata {
    name = "demo"
  }
  spec {
    min_replicas = 2
    max_replicas = 30
    scale_target_ref {
      api_version = "apps/v1"
      kind = "Deployment"
      name = "demo"
    }
    metric {
      type = "Resource"
      resource {
        name = "cpu"
        target {
          type = "Utilization"
          average_utilization = "80"
        }
      }
    }
    metric {
      type = "Resource"
      resource {
        name = "memory"
        target {
          type = "Utilization"
          average_utilization = "80"
        }
      }
    }
  }
}

Debug Output

Crash Output

Expected Behavior

HPA should be applied only once.

Actual Behavior

Each time terraform apply is run, a diff is shown.

Steps to Reproduce

  1. terraform init
  2. terraform apply
  # module.demo.kubernetes_horizontal_pod_autoscaler.hpa will be created
  + resource "kubernetes_horizontal_pod_autoscaler" "hpa" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "demo"
          + namespace        = "default"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + max_replicas                      = 30
          + min_replicas                      = 2
          + target_cpu_utilization_percentage = (known after apply)

          + metric {
              + type = "Resource"

              + resource {
                  + name = "cpu"

                  + target {
                      + average_utilization = 80
                      + type                = "Utilization"
                    }
                }
            }
          + metric {
              + type = "Resource"

              + resource {
                  + name = "memory"

                  + target {
                      + average_utilization = 80
                      + type                = "Utilization"
                    }
                }
            }

          + scale_target_ref {
              + api_version = "apps/v1"
              + kind        = "Deployment"
              + name        = "demo"
            }
        }
    }
  3. terraform apply
  # module.demo.kubernetes_horizontal_pod_autoscaler.hpa will be updated in-place
  ~ resource "kubernetes_horizontal_pod_autoscaler" "hpa" {
        id = "default/demo"


      ~ spec {
            # (3 unchanged attributes hidden)

          ~ metric {
                # (1 unchanged attribute hidden)

              ~ resource {
                  ~ name = "memory" -> "cpu"

                    # (1 unchanged block hidden)
                }
            }
          ~ metric {
                # (1 unchanged attribute hidden)

              ~ resource {
                  ~ name = "cpu" -> "memory"

                    # (1 unchanged block hidden)
                }
            }

            # (1 unchanged block hidden)
        }
        # (1 unchanged block hidden)
    }

Additional Context

When only CPU autoscaling is used, terraform apply prints no changes after the initial apply. However, when both CPU and memory autoscaling are defined, a diff is printed each time terraform apply is run, swapping cpu for memory and vice versa.

References

@hydrapolic added the bug and new (new issue not yet triaged) labels on Apr 30, 2021
@jbardin (Member) commented Apr 30, 2021

Hi @hydrapolic,

Thanks for filing the issue. I have a feeling that the provider is changing some combination of values not shown in the diff, which is triggering the output you see here. In order to work with legacy providers, Terraform hides some trivial changes that normally don't affect the result, such as empty values being swapped for null values.

You may find more clues by looking at the logs. I suspect you will find warnings about the provider producing an inconsistent plan for this resource.

Is there an easy way to setup a standalone reproduction of this configuration so we can troubleshoot it?

Thanks!
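Terraform's standard TF_LOG and TF_LOG_PATH environment variables can be used to capture the logs jbardin refers to; the grep pattern below is an assumption about the warning's wording, not a confirmed message:

```shell
# Write a full trace log of the apply to a file, then search it for
# provider-consistency warnings. TF_LOG and TF_LOG_PATH are standard
# Terraform environment variables.
TF_LOG=trace TF_LOG_PATH=trace.log terraform apply
grep -i "inconsistent" trace.log
```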

@jbardin added the waiting for reproduction (unable to reproduce issue without further information) and waiting-response (an issue/pull request is waiting for a response from the community) labels on Apr 30, 2021
@jrhouston (Contributor)

It looks like this is actually a provider bug – duplicate issue here: hashicorp/terraform-provider-kubernetes#1188

@hydrapolic (Author)

Thanks @jrhouston for the linked bug; it's exactly the same issue.
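Since the fix has to land in the provider (per the linked hashicorp/terraform-provider-kubernetes#1188), one possible interim stopgap, sketched below and not verified against this provider version, is to let Terraform ignore drift in the HPA spec. Note that this also hides genuine spec changes, so it should be removed once the provider bug is fixed:

```hcl
resource "kubernetes_horizontal_pod_autoscaler" "hpa" {
  # ... metadata and spec blocks as in the configuration above ...

  lifecycle {
    # Stopgap (an assumption, not a confirmed fix): suppress the
    # spurious cpu/memory metric-swap diff by ignoring all changes
    # to spec. WARNING: this also masks real drift in spec.
    ignore_changes = [spec]
  }
}
```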

@ghost removed the waiting-response (an issue/pull request is waiting for a response from the community) label on May 5, 2021
@github-actions (bot) commented Jun 5, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions bot locked as resolved and limited conversation to collaborators Jun 5, 2021