
Environment variable to control exported user #8017

Closed
itmecho opened this issue Nov 27, 2019 · 7 comments · Fixed by #9280

Comments

@itmecho

itmecho commented Nov 27, 2019

It would be great to have an environment variable to configure the user kops exports.

We have RBAC set up with Google OIDC for auditing purposes, and when we run commands like kops replace or kops export kubecfg, it overwrites the current kubeconfig context with the default certificate-based admin user. Any commands we then run with kubectl show up as the admin user rather than as individual users.

In this case, it would be great for kops not to add the admin user to the kubeconfig file and instead use the value from the environment variable in the context block. The environment variable could be something like KOPS_KUBECFG_USER.
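
For illustration, a minimal sketch of the proposed behavior using client-go's clientcmd types (the env var name and the branching are just this proposal, nothing kops implements today):

```go
package main

import (
	"fmt"
	"os"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func buildContext(clusterName string) *clientcmdapi.Context {
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = clusterName

	// KOPS_KUBECFG_USER is the env var proposed in this issue;
	// it does not exist in kops today.
	if user := os.Getenv("KOPS_KUBECFG_USER"); user != "" {
		ctx.AuthInfo = user // reuse a user already defined in the kubeconfig
	} else {
		ctx.AuthInfo = clusterName // current behavior: admin user named after the cluster
	}
	return ctx
}

func main() {
	fmt.Println(buildContext("my.cluster.example.com").AuthInfo)
}
```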

@rifelpet
Member

I like the idea of having kops export kubecfg optionally not set the user; we run into the same issue.

Once we decide on exactly how it should be implemented (env var, cli flag, etc.) I think this would be a good beginner issue if anyone would like to take it on.

The majority of the changes will be in these two files:

https://github.com/kubernetes/kops/blob/master/cmd/kops/export_kubecfg.go
https://github.com/kubernetes/kops/blob/master/pkg/kubeconfig/create_kubecfg.go

Kops names the cluster, context, and user after the cluster name. In our case we define a user with that same name, so kops just clobbers our user definition; it sounds like you create an additional user and want the context's user not to be reverted. If we can come up with a flexible implementation that handles both scenarios, that would be great.
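
To make the clobbering concrete, here is a rough sketch (not kops code; it assumes client-go's clientcmd package and a hypothetical setUser switch) of an export that leaves an existing user entry of the same name alone:

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// exportKubecfg updates the cluster and context entries named after the
// cluster, but only touches the user entry when setUser is true, so a
// hand-maintained OIDC user of the same name survives the export.
func exportKubecfg(path, clusterName, server string, setUser bool) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}

	cluster := clientcmdapi.NewCluster()
	cluster.Server = server
	cfg.Clusters[clusterName] = cluster

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = clusterName
	ctx.AuthInfo = clusterName // context and user share the cluster's name
	cfg.Contexts[clusterName] = ctx

	if setUser {
		cfg.AuthInfos[clusterName] = clientcmdapi.NewAuthInfo() // admin certs would be filled in here
	}

	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := exportKubecfg("/tmp/kubeconfig", "my.cluster.example.com", "https://api.example.com", false); err != nil {
		log.Fatal(err)
	}
}
```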

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Feb 26, 2020
@itmecho
Author

itmecho commented Feb 28, 2020

/remove-lifecycle stale

I think this is still a valid feature request

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Feb 28, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label May 28, 2020
@olemarkus
Member

/remove-lifecycle stale

This is an issue for us as well.
I think using the admin user is something we should generally discourage. Any command that currently exports that user should instead require an explicit --export-admin-user flag. In addition, we could add --context-user or similar for pointing the context kops creates at a pre-existing user. If neither flag is provided when the context is created, fail with a helpful message. We could also add some OIDC configuration examples to our docs (in our case Azure AD).
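
A rough sketch of that flag logic (using the standard flag package for brevity; kops itself uses cobra, and the flag names here just mirror the proposal above):

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	admin := flag.Bool("admin", false, "export the cluster admin user")
	user := flag.String("user", "", "attach the context to this pre-existing user")
	flag.Parse()

	contextExists := false // in real code this would be looked up in the kubeconfig

	switch {
	case *admin && *user != "":
		fmt.Fprintln(os.Stderr, "cannot use --admin and --user together")
		os.Exit(1)
	case !contextExists && !*admin && *user == "":
		fmt.Fprintln(os.Stderr, "context does not exist yet: pass --admin or --user <existing user>")
		os.Exit(1)
	}
	// ... create or update the context with the chosen user ...
}
```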

@rifelpet maybe something to bring up in office hours?

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label May 28, 2020
@johngmyers
Member

Our wrapper around kops adds a flag that strips the admin credentials, replacing them with the configuration to invoke our authentication hook.

I think it would be better to include the admin credentials only when an explicit flag requests them. I'm not quite sure how to get the information needed to configure a presumably site-specific authentication hook.

@olemarkus
Member

#9280 illustrates what I'd like to see. I just hacked it in now, and didn't bother with tests. But you get the idea.

  • If the context doesn't exist, the user has to add --admin or --user <existing user>.
  • If the context exists and neither flag is specified, don't change the user part of the context.

I am not sure how your authentication hook works, but ours works via an exec plugin: kubectl calls our hook with a set of params, and the hook returns the JWT used against our OIDC provider.
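
For anyone unfamiliar with the mechanism, a minimal sketch of such an exec-plugin user built with client-go types (the command name and args are placeholders, not our actual hook):

```go
package main

import (
	"fmt"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// oidcExecUser builds a kubeconfig user entry that delegates credential
// fetching to an external command; kubectl runs the command and reads the
// returned ExecCredential, which carries the OIDC token.
func oidcExecUser() *clientcmdapi.AuthInfo {
	user := clientcmdapi.NewAuthInfo()
	user.Exec = &clientcmdapi.ExecConfig{
		APIVersion: "client.authentication.k8s.io/v1beta1",
		Command:    "our-auth-hook", // hypothetical site-specific helper
		Args:       []string{"get-token", "--issuer", "https://oidc.example.com"}, // placeholder args
	}
	return user
}

func main() {
	fmt.Println(oidcExecUser().Exec.Command)
}
```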
