
API errors from applying "valid" container_cluster resource #2119

Closed
wyardley opened this issue Sep 26, 2018 · 3 comments
Labels: bug, forward/review (In review; remove label to forward), service/container

Comments

@wyardley

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

Terraform v0.11.8

Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Terraform Configuration Files

These configs pass validation but produce errors on apply. This one:

resource "google_container_cluster" "this" {
  name               = "${var.name}"
  zone               = "${var.zone}"
  initial_node_count = "${var.min_node_count}"

  enable_legacy_abac = "true"
  node_version       = "${var.node_version}"
  min_master_version = "${var.min_master_version}"
  monitoring_service = "monitoring.googleapis.com"

  node_config {
    service_account = "${element(split("/", google_service_account.this.name), 3)}"
    machine_type    = "${var.machine_type}"
    oauth_scopes    = "${var.scopes}"
  }
  node_pool {
    autoscaling {
      min_node_count = "${var.min_node_count}"
      max_node_count = "${var.max_node_count}"
    }
  }
}

and this one:

resource "google_container_cluster" "this" {
  name               = "${var.name}"
  zone               = "${var.zone}"
  initial_node_count = "${var.min_node_count}"

  enable_legacy_abac = "true"
  node_version       = "${var.node_version}"
  min_master_version = "${var.min_master_version}"
  monitoring_service = "monitoring.googleapis.com"

  node_pool {
    autoscaling {
      min_node_count = "${var.min_node_count}"
      max_node_count = "${var.max_node_count}"
    }
    node_config {
      service_account = "${element(split("/", google_service_account.this.name), 3)}"
      machine_type    = "${var.machine_type}"
      oauth_scopes    = "${var.scopes}"
    }
  }
}

Both pass validation and plan, but then error (see below) on apply.

We were able to fix this by moving initial_node_count down into the node_pool block.
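For reference, the fixed version of the second config might look like the following sketch (same variables as above; illustrative only, not verified against the provider):

```hcl
resource "google_container_cluster" "this" {
  name               = "${var.name}"
  zone               = "${var.zone}"

  enable_legacy_abac = "true"
  node_version       = "${var.node_version}"
  min_master_version = "${var.min_master_version}"
  monitoring_service = "monitoring.googleapis.com"

  node_pool {
    # initial_node_count moved down here from the cluster level
    initial_node_count = "${var.min_node_count}"

    autoscaling {
      min_node_count = "${var.min_node_count}"
      max_node_count = "${var.max_node_count}"
    }
    node_config {
      service_account = "${element(split("/", google_service_account.this.name), 3)}"
      machine_type    = "${var.machine_type}"
      oauth_scopes    = "${var.scopes}"
    }
  }
}
```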

Expected Behavior

Terraform should have generated an error when validating or running the plan.

Actual Behavior

We got an API error bubbling up. When node_pool and node_config were at the same level within the cluster config:

  • google_container_cluster.this: googleapi: Error 400: It's invalid to specify both cluster.node_config and a node pool. Please only provide a node pool., badRequest

When we moved node_config inside the node_pool block:

  • google_container_cluster.this: googleapi: Error 400: It's invalid to specify both cluster.initial_node_count and a node pool. Please only provide a node pool., badRequest

Steps to Reproduce

  1. terraform apply

Important Factoids

In this case, we were authenticating as a user.

References

@ghost ghost added the bug label Sep 26, 2018
@paddycarver
Contributor

I think a bit of validation / ConflictsWith would probably resolve these.

@rileykarson
Collaborator

While we should have added this validation, based on my experience with similar fields in this resource, it would count as a breaking change to make now. It's possible that users set node_config, set lifecycle.ignore_changes on it, and then set node_pool with the default pool's node_config nested inside.
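The scenario described here might look something like this (a hypothetical config; names and values are illustrative only):

```hcl
resource "google_container_cluster" "this" {
  name               = "example"
  zone               = "us-central1-a"
  initial_node_count = 1

  # Older config for the default pool, left in place
  node_config {
    machine_type = "n1-standard-1"
  }

  # Drift on the top-level node_config is deliberately ignored...
  lifecycle {
    ignore_changes = ["node_config"]
  }

  # ...while the default pool's real config lives here
  node_pool {
    name = "default-pool"

    node_config {
      machine_type = "n1-standard-1"
    }
  }
}
```

Adding a ConflictsWith check between node_config and node_pool would reject configs like this one, which is why it would be a breaking change.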

I'm hoping to rework the google_container_cluster resource to avoid this scenario in 3.0.0, so this experience should be better when it's possible!

@ghost

ghost commented Jul 26, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Jul 26, 2019
@github-actions github-actions bot added service/container forward/review In review; remove label to forward labels Jan 15, 2025