This repository has been archived by the owner on Jul 27, 2023. It is now read-only.

Required variable kube_worker_ips is not set #1329

Closed
joe1chen opened this issue Apr 6, 2016 · 7 comments
@joe1chen

joe1chen commented Apr 6, 2016

  • Ansible version (ansible --version): 1.9.5
  • Python version (python --version): 2.7.10
  • Git commit hash or branch: 6185f11
  • Cloud Environment: AWS and Cloudflare for DNS
  • Terraform version (terraform version): 0.6.14

I had Mantl up and running on master with the revision just before the Kubernetes changes were merged in. Now, when trying to run terraform apply, I'm getting the error:

* module root: module dns: required variable kube_worker_ips not set

It looks like terraform/cloudflare/main.tf now requires kube_worker_ips to be passed in even if I don't have any Kubernetes workers. It will also try to create Cloudflare records for dns-kube-worker if I pass in something for kube_worker_ips.

Maybe this needs to be refactored out of the general Cloudflare DNS Terraform. There should be a k8-main.tf or something that you could include only if you want DNS settings for Kubernetes.
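For reference, the error comes from a module input declared without a default, which Terraform treats as required. A minimal sketch of what such a declaration looks like (the variable name is taken from the error message; the actual terraform/cloudflare/main.tf in Mantl may differ):

```hcl
# A variable block with no default is a required input: every caller
# of the module must set it, or terraform plan/apply errors out.
variable "kube_worker_ips" {}
```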

@langston-barrett
Contributor

cc @BrianHicks @Zogg

@langston-barrett
Contributor

@joe1chen We're turning on Kubernetes by default soon, so that's why those are in the default Terraform configs. That variable shouldn't be required if you don't have any K8s workers, though, so this is probably a bug.

@joe1chen
Author

joe1chen commented Apr 6, 2016

I guess it is currently a bug then, since kube_worker_ips does not have a default value defined. However, even if I added a { default = "" } to get around the error, Terraform is then going to try to create a dns-kube-worker record with a blank value. Not sure how cloudflare_record would handle the blank string.
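To illustrate the workaround being discussed, a sketch of giving the variable an empty default (not the actual Mantl code):

```hcl
# Silences "required variable kube_worker_ips not set", but the value
# interpolated into any dns-kube-worker record would then be "".
variable "kube_worker_ips" {
  default = ""
}
```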

@BrianHicks
Contributor

Sorry you've been bitten by this, @joe1chen; we try to keep backwards compatibility, but it looks like we haven't this time. @siddharthist is right that it's a bug. You should be able to set the variable to an empty string to avoid creating records. We'll try to set this (or something else reasonable) by default in 1.1.
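For anyone working around this before the 1.1 fix lands, the empty value can also be supplied without editing the module; a hedged example (variable name taken from the error message):

```hcl
# terraform.tfvars -- satisfy the dns module's required input
# without pointing it at any worker addresses
kube_worker_ips = ""
```

Equivalently, pass `-var 'kube_worker_ips='` on the terraform apply command line.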

@BrianHicks BrianHicks added this to the 1.1 milestone Apr 6, 2016
@BrianHicks
Contributor

ah ha, we were commenting at the same time. If kube_worker_count is set to 0, it shouldn't create any records.
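A minimal sketch of how a count-gated resource creates no records when the count is zero (attribute values here are illustrative, not the exact Mantl resource):

```hcl
# With count interpolated from kube_worker_count, setting that variable
# to 0 makes Terraform create zero dns-kube-worker records.
resource "cloudflare_record" "dns-kube-worker" {
  count  = "${var.kube_worker_count}"
  domain = "${var.domain}"
  name   = "kube-worker-${format("%02d", count.index)}"
  value  = "${element(split(",", var.kube_worker_ips), count.index)}"
  type   = "A"
}
```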

@joe1chen
Author

joe1chen commented Apr 6, 2016

Got it, I see now that kube_worker_count should prevent it from being created. Thanks for the quick responses. BTW, we're looking forward to testing out the Kubernetes integration, so keep up the good work.

@BrianHicks
Contributor

Well, your timing is just right: I could sure use someone to test out PR #1330 if you're interested. There be dragons there, though, and you may want to wait if you want a production K8s cluster from that branch.

BrianHicks added a commit that referenced this issue Apr 6, 2016
If not set, this causes Terraform to error. This is a problem for those coming from 1.0.x Terraform configurations.

Fixes #1329
@BrianHicks BrianHicks self-assigned this Apr 6, 2016
langston-barrett pushed a commit that referenced this issue Apr 6, 2016
If not set, this causes Terraform to error. This is a problem for those coming from 1.0.x Terraform configurations.

Fixes #1329