Kubecfg overwritten in kops 1.19, without user flags specified #11021
Comments
Specifically, I'm annoyed that 1.19 still updates the kubecfg, even though the release notes explicitly call out that this no longer happens by default ("kOps will no longer automatically export the kubernetes config on kops update cluster").
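For reference, a minimal sketch of the opt-in flow the release notes describe, assuming the kops 1.19 CLI (the cluster name is a placeholder; verify flags with kops export kubecfg --help):

    kops update cluster --name my.cluster.example.com --yes      # per the release notes, should not touch ~/.kube/config
    kops export kubecfg --name my.cluster.example.com --admin    # explicit, opt-in export of admin credentials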
I confirmed this just now as well. Seems to be a result of
Thanks for the patience. We discussed this, and @justinsb will look into it soon.
Sorry about the behaviour here - we're trying to balance setting the current context so that you don't have to specify it every time (the UX) while also being more secure about not always exporting admin credentials. There is a flag for this; I'm looking into it, and I can certainly clarify the docs to specify that it's the user config that we won't overwrite or export by default. I'm worried that if we don't overwrite the server config, it will cause a different class of problems for users when their configuration changes and they forget to export.

In addition, I'm looking to clean up the code, but I don't think it's particularly easy to change the behaviour here. My 2c is that we should work towards making it so that you don't have to edit the kubeconfig - i.e. exporting the internal API address by default (so it would be good to know if the --internal flag works) and also moving to a secure configuration using the auth plugin, so we can once again configure admin credentials by default.
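For illustration, a hedged sketch of the two directions mentioned above, assuming the flags available in kops 1.19 (the cluster name is a placeholder; check the respective --help output):

    kops export kubecfg --name my.cluster.example.com --internal     # export using the internal API endpoint
    kops update cluster --name my.cluster.example.com --yes --admin  # opt back in to exporting admin credentials on update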
Make a clearer distinction between exporting kubeconfig (including server endpoints / certificates) vs exporting credentials. Issue kubernetes#11021
I was curious if maybe we had a bug with
I checked and it seemed to work with
Hmm, yes, that works. But regardless, the behaviour of overwriting existing cluster configurations is still unexpected, and I'd like it not to do that unless explicitly asked to.
@MMeent thanks for confirming. We do want to export the kubecfg when we're first creating the cluster; we also want to export it if the endpoint has changed (e.g. if the cluster switches from DNS to a load balancer, although I'm not sure this is actually something that can be done!). We do have a flag for that.

One thing I'd like us to do more of is use our kops configuration file (~/.kops/config); currently that's limited to basically just configuring kops_state_store, but we could make create_kube_config configurable there. We'd probably also have to have different options for create vs update, but that seems doable...
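For context, a minimal sketch of what the kops configuration file mentioned above holds today and how the idea could look; kops_state_store is the existing option, while create_kube_config is purely hypothetical here, not a shipped setting:

    # ~/.kops/config is a small YAML file read by kops on startup
    cat <<'EOF' > ~/.kops/config
    kops_state_store: s3://my-kops-state-store   # existing option: default for --state / KOPS_STATE_STORE
    # create_kube_config: false                  # hypothetical option discussed above
    EOF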
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
1. What kops version are you running? The command kops version will display this information.
1.19.0
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
1.19
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster --yes
5. What happened after the commands executed?
6. What did you expect to happen?
The kubecfg shouldn't have been updated. E.g. not
7. Please provide your cluster manifest.
Not applicable.
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
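A hedged example of how that output could be captured for such a report (log file name and cluster name are arbitrary placeholders):

    kops update cluster --name my.cluster.example.com --yes -v 10 2>&1 | tee kops-update.log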
9. Anything else do we need to know?
I'm manually setting the server of the kubecfg to the internal name (api.internal.cluster-name). The API is only accessible from internal IPs, so defaulting to the public IPs is annoying, and having this configuration overwritten each time (whilst the release notes say that wouldn't happen anymore unless specifically asked for) is also a chore to revert time and time again.
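For illustration, a hedged sketch of the manual fix that has to be redone after each overwrite, using standard kubectl config commands (the cluster/context names and the internal DNS name are placeholders):

    # point the cluster entry back at the internal API endpoint
    kubectl config set-cluster my.cluster.example.com \
      --server=https://api.internal.my.cluster.example.com
    kubectl config use-context my.cluster.example.com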