Alias kops replace to kops apply to match kubectl #2616

Closed
ghost opened this issue May 21, 2017 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


ghost commented May 21, 2017

As discussed here.

The idea is to change kops replace to kops apply, so it matches kubectl and its semantics (you don't replace a cluster, you apply a new desired state to it, which may or may not involve replacing parts of the cluster or the whole cluster).

Practically, we'd have to copy the command definition code for the replace command, rename replace to apply, and then perhaps add a deprecation warning to the replace command. AFAICT that's all there is to it.
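
Something like this rough sketch (assuming spf13/cobra, which kops uses for its CLI; the wiring and names here are simplified placeholders, not the actual kops command code):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "kops"}

	// Hypothetical shared implementation for both verbs; the real command
	// would load the manifest and update the state store.
	run := func(cmd *cobra.Command, args []string) {
		fmt.Println("applying desired state from manifest...")
	}

	applyCmd := &cobra.Command{
		Use:   "apply -f FILENAME",
		Short: "Apply a desired cluster state from a manifest",
		Run:   run,
	}

	replaceCmd := &cobra.Command{
		Use:        "replace -f FILENAME",
		Short:      "Replace cluster resources from a manifest",
		Deprecated: "use 'kops apply' instead", // cobra prints this as a warning when the command runs
		Run:        run,
	}

	root.AddCommand(applyCmd, replaceCmd)
	if err := root.Execute(); err != nil {
		fmt.Println(err)
	}
}
```

Alternatively, adding "apply" to the existing replace command's Aliases field would make kops apply work without a separate command definition, though that route wouldn't print a deprecation warning for replace.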


ghost commented May 24, 2017

I've implemented this suggestion in the linked PR; if anyone could comment on it, that would be great.


justinsb commented Jun 9, 2017

So I'm working on "kops server" mode, where we will be able to run kops as a kubernetes apiserver, and you'll be able to use kubectl apply.

My concern therefore is that we might be increasing the mental overhead if kops apply is subtly different from kubectl apply. kubectl apply applies immediately, which is certainly going to be one big difference (and feeds into the --yes discussion happening on other issues).

So I agree this would be good, but I ask that we wait until we have kubectl apply and can compare, to make sure we aren't painting ourselves into a corner. (And yes, I know replace already exists as well... :-) )


ghost commented Jun 9, 2017 via email


justinsb commented Jun 9, 2017

You're right, but let me provide a bit more of a description!

Kubernetes has api-machinery which does object versioning etc. We've used that machinery for a long time in kops - it is why the kops YAML files "look like" k8s objects.
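
For illustration, a trimmed-down stand-in (not the real kops types) for what that looks like - the types embed the same apimachinery metadata, which is what gives the YAML the familiar apiVersion/kind/metadata/spec shape:

```go
package v1alpha2

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Cluster is a simplified, hypothetical stand-in for the kops Cluster kind.
// Embedding TypeMeta and ObjectMeta supplies the apiVersion/kind/metadata
// fields, just like core Kubernetes objects.
type Cluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ClusterSpec `json:"spec,omitempty"`
}

// ClusterSpec holds only one illustrative field here.
type ClusterSpec struct {
	KubernetesVersion string `json:"kubernetesVersion,omitempty"`
}
```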

Over the past few months the kubernetes project has refactored out the kubernetes api server logic, such that it is now practical to run an apiserver with your own API objects. It would work with kubectl. So we would have a kubernetes apiserver that only spoke Cluster & InstanceGroup etc. That's what I'm calling the "kops server".

In 1.7 there should be early support for aggregating API servers, so the kops server would appear in the "main" kubernetes API. So you would be able to edit your instancegroups from the main kubectl.

I don't see this replacing the current mode of operation (where there is no server). If nothing else, it makes bootstrapping hard! Also, I like the lightweight CLI / S3 mode of operation.

But for teams, it might be handy to have a standalone kops server, rather than sharing an S3 bucket. And some people might prefer it (and we could probably create an AMI that boots up into kops server, to avoid the bootstrapping problem).

There are challenges here: the current mode of operation of k8s is quite different from kops in that when you kubectl apply something it happens right away, whereas with kops there is a separate "confirm" step (kops update cluster). I think that confirmation step would have to go away in kops-server mode; that's going to be tricky, and that's why I'm a little wary of adding to the complexity right now.

This is good and I want to do it, but it's one of those where I'm like "uh oh, this could make other work much harder", so my opinion is that we should wait and see how that other work goes. But that is predicated on this PR being a "nice to have" rather than critical functionality. (And it is nice to have, but I hope this makes sense!)

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Dec 26, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now, please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 25, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
