This repository has been archived by the owner on Mar 29, 2023. It is now read-only.
gke cluster module isn't idempotent #59
I have a template that calls the gke-cluster module. No matter what variables I pass to the template, every run of `terragrunt apply` reports that it needs to destroy the old cluster and create a new one.

If the plan is to be believed, the tags on the default node pool must be explicitly set to the same values as the tags on the non-default node pool, so that subsequent runs don't require updates. I haven't tested this via a fork yet.

The module forced a new cluster both when I specified no node_pool outside of the module and when I specified a node_pool outside the module. I haven't tried a node_pool with no tags, because the network won't work without at least a 'private' tag.

I also tried two node_pools, just to see what would happen, and it still forces a destroy -> create on every run. I'd love to know how to make the module stable from run to run.

Do you have

Never mind, I opened #60 with the fix.

This should be fixed in https://github.com/gruntwork-io/terraform-google-gke/releases/tag/v0.3.5

Awesome. Thanks. So fast, I didn't even see the response before it was closed!
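The tag mismatch described in the thread can be sketched as follows. This is an illustrative assumption, not the module's actual interface: the module source path, the `tags` input, and the resource names here are hypothetical. The idea is that the tags Terraform records on the cluster's default node configuration must match the tags on any separately managed node pool, or every plan shows a diff and forces a destroy -> create:

```hcl
# Hypothetical sketch of the idempotency issue, not the real module interface.

module "gke_cluster" {
  # Illustrative source path.
  source = "../modules/gke-cluster"

  name = "example-cluster"

  # Hypothetical input: if these tags differ from the tags on the
  # external node pool below, Terraform detects a change on the
  # cluster's default node config and forces replacement on every run.
  tags = ["private"]
}

resource "google_container_node_pool" "pool" {
  name    = "example-pool"
  cluster = module.gke_cluster.name

  node_config {
    # Keeping these identical to the module's tags is what (per the
    # thread) makes subsequent plans report no changes.
    tags = ["private"]
  }
}
```

The general pattern holds for any Terraform attribute that forces replacement when changed: if two places describe the same underlying resource, their values must agree or `plan` will never converge.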