
Updating subproperties after cluster creation is prohibited by kops cli #746

Closed
shrugs opened this issue Oct 28, 2016 · 8 comments

@shrugs
Contributor

shrugs commented Oct 28, 2016

kops doesn't accept adding runtimeConfig to an existing cluster because the definition in config differs from cluster.spec (i.e., it thinks we're trying to set CloudProvider to "", and rejects the change).

The workaround is to download cluster.spec and copy the kubeAPIServer block into config in full (and then add runtimeConfig).

After that I believe you can rolling-update as expected (untested).
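
For reference, roughly what the patched config ends up looking like after the copy (illustrative only; the actual kubeAPIServer fields are whatever your downloaded cluster.spec contains, with runtimeConfig added):

  kubeAPIServer:
    cloudProvider: aws        # copied from cluster.spec, so the diff no longer sees "aws" vs ""
    # ... every other kubeAPIServer field copied verbatim from cluster.spec ...
    runtimeConfig:
      "batch/v2alpha1": "true"   # the new property being added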

@chrislovecnm
Contributor

@shrugs so we are missing pieces of the runtimeConfig section upon initial creation?

@chrislovecnm chrislovecnm changed the title Updating subproperties after cluste creation is prohibited by kops cli Updating subproperties after cluster creation is prohibited by kops cli Oct 29, 2016
@shrugs
Contributor Author

shrugs commented Oct 30, 2016

@chrislovecnm I haven't looked at the code just yet, so this may be off, but the issue seems to be that when kops receives the updated config from kops edit cluster :cluster, it attempts to diff the changes against the existing cluster.spec. The process that diffs the old config with the new config seems to take unspecified values and coerce them into empty strings. Then the process that diffs the new config against cluster.spec to determine which changes need to be applied to AWS sees that we're attempting to change CloudProvider from "aws" to "", and denies all of the changes.

The workaround I posted above works because we're then trying to set CloudProvider from aws to aws, which is detected as no change, and passes the check.
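
Roughly what I think the diff ends up comparing (illustrative YAML, not actual kops output): the edited config only specifies runtimeConfig, so the unspecified sibling fields get coerced to empty values before being compared against the completed cluster.spec:

  kubeAPIServer:
    cloudProvider: ""            # not specified in the edit, but "aws" in cluster.spec, so it reads as a change
    runtimeConfig:
      "batch/v2alpha1": "true"   # the only value actually being changed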

I may have time later in the week to check out the code and test a fix, but I don't want to make any guarantees.

@chrislovecnm
Contributor

@justinsb thoughts?

@brunoalano

The solution by @shrugs works; I tested it because I had the same error.

@chrislovecnm
Contributor

We have a PR inbound that may help with this ... I am looking through issues to test against #1183 - no promises, but we need to test.

@justinsb justinsb modified the milestone: 1.5.0 Dec 28, 2016
@elblivion
Contributor

The workaround worked for me, but only after a rolling-update --force --yes as described in #618.

@shrugs
Contributor Author

shrugs commented Feb 20, 2017

I can confirm that this is no longer an issue on master, so I'm closing. In summary, after adding the additional properties (note the quotes around "true")

  kubeAPIServer:
    runtimeConfig:
      "batch/v2alpha1": "true"

you must do a forced rolling update (kops rolling-update cluster {cluster} --force --yes), since kops doesn't yet have a way to determine whether or not the process on the master is running with the correct arguments.

@shrugs shrugs closed this as completed Feb 20, 2017
@snoby
Contributor

snoby commented Jun 14, 2017

I can say that with a fresh cluster on version 1.5.4, after applying these changes the masters never come back online again after the rolling-update.
