Invalid count argument #690
Comments
Hi @tvvignesh |
@bharathkkb Hi. Running TF version v0.13.2, GKE 1.18.6-gke.4801, and the latest version of this module. |
@tvvignesh could you provide your config? I can try to reproduce it. |
@bharathkkb Sure. This would be the relevant portion of the config. Kindly replace the vars where necessary.

module "global_gke" {
  source      = "../modules/safer-cluster-update-variant"
  description = "My Cluster"
  project_id  = module.global_enabled_google_apis.project_id
  name        = var.global_cluster_name
  region      = var.global_region
  network     = module.global_vpc.network_name
  subnetwork  = module.global_vpc.subnets_names[0]

  horizontal_pod_autoscaling      = true
  enable_vertical_pod_autoscaling = true
  enable_pod_security_policy      = true
  http_load_balancing             = true
  gce_pd_csi_driver               = true
  monitoring_service              = "none"
  logging_service                 = "none"
  release_channel                 = "RAPID"
  enable_shielded_nodes           = true

  # Secondary range names for pods and services, taken from the first
  # subnet of the VPC module's outputs
  ip_range_pods     = module.global_vpc.subnets_secondary_ranges[0].*.range_name[0]
  ip_range_services = module.global_vpc.subnets_secondary_ranges[0].*.range_name[1]

  master_authorized_networks = [{
    cidr_block   = "${module.global_bastion.ip_address}/32"
    display_name = "Global Bastion Host"
  }]

  grant_registry_access = true

  node_pools = [
    {
      name            = "global-pool-1"
      machine_type    = "n1-standard-4"
      min_count       = 1
      max_count       = 20
      local_ssd_count = 0
      disk_size_gb    = 30
      disk_type       = "pd-ssd"
      image_type      = "UBUNTU_CONTAINERD"
      auto_repair     = true
      auto_upgrade    = true
      node_metadata   = "GKE_METADATA_SERVER"
      service_account = var.global_sa
      preemptible     = false
    }
  ]
} |
Having the exact same issue as well. It seems to only happen when you've made an error, and once it gets into this state you can't recover. |
@halkyon What was the error you made? Reproducing this will likely require us to see your broken config. |
@morgante Here you go: https://github.com/halkyon/gke-beta-private-cluster-example (using Terraform v0.13.4). Change the values in the variables file as needed.
Hope this helps! |
Exact same issue here. |
I was able to reproduce this with 0.13.4; it seems that after the node pool config errors out, TF is unable to resolve the count. Works as intended with 0.12.29. |
Any updates? This happens to me too with 0.13.4, after upgrading the node pool. |
What's up with this? The module fails, and it's easy to replicate if you put in an invalid machine type, for example. Can you please fix this? |
Since this is working in Terraform 0.12.x but not in 0.13.x, I'm inclined to believe this is a Terraform Core issue. We can attempt to work around it, but it's not a high priority when Core should be fixing it. |
I was able to create a light repro which works with 0.12.x and not with 0.13.4. I will open an issue in core. |
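For reference, the failing pattern can be sketched minimally like this (a hypothetical illustration of the error class, not necessarily the exact repro filed in core): a count expression depends on a module output that is unknown until the upstream resource has been created.

# mod_a/main.tf
resource "random_id" "this" {
  byte_length = 4
}

output "id" {
  value = random_id.this.hex
}

# main.tf (root module)
module "a" {
  source = "./mod_a"
}

resource "null_resource" "wait" {
  # module.a.id is unknown at plan time until random_id.this exists, so
  # Terraform cannot evaluate this count and reports "Invalid count argument"
  count = module.a.id != "" ? 1 : 0
}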
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days
I'm getting this issue with Terraform 0.14.7, during the tf plan phase. Any suggestions for a workaround? |
@AlexBulankou Is this for a fresh deploy? What does your module configuration look like? |
Yes, this is a fresh deploy: module config. |
To follow up: the workaround for me was to go back to an earlier version. |
Hi. I tried setting up a GKE private cluster (safer-cluster-update-variant), and whenever I make any errors (accidentally giving the wrong image name or machine type and so on), the apply fails (not detected in plan), which is understandable.
But if I fix the issue and run plan and apply again, I get this:
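(The error output was not preserved in the original; for context, Terraform's standard message for this failure reads as follows.)

Error: Invalid count argument

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.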
It has been discussed here:
hashicorp/terraform#21450
hashicorp/terraform#12570
but I am not able to understand how to get over this.
I do understand that it is happening because Terraform is not able to find any node pool in the cluster from which it can determine the count. If I go to
.terraform/modules/global_gke.gke.gcloud_wait_for_cluster/main.tf
I can see the line where the issue is. Currently what I am doing is deleting the cluster every time and re-creating it from scratch. May I know how I can avoid doing that and just fix this issue? Thanks.
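For illustration, the problematic pattern in that file is generally of this shape (a hypothetical sketch, not the module's verbatim code): a count that references an attribute of the cluster, which becomes unknown whenever the cluster or its node pools must be re-created.

resource "null_resource" "wait_for_cluster" {
  # Hypothetical sketch: because the count references a cluster attribute,
  # it cannot be evaluated at plan time once the cluster must be replaced
  count = google_container_cluster.primary.endpoint != "" ? 1 : 0
}

The usual workaround for this class of error is to apply the upstream resources first with -target and then run a full apply (the module address here is illustrative):

terraform apply -target=module.global_gke
terraform apply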