
kubectl plan proposal #1142

Closed · wants to merge 2 commits

Conversation

pwittrock
Member

No description provided.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Oct 2, 2017
@pwittrock
Member Author

cc @apelisse

@pwittrock
Member Author

cc @liggitt

@bgrant0607
Member

cc @kubernetes/sig-cli-proposals

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. kind/design Categorizes issue or PR as related to design. labels Oct 5, 2017

> ## TL;DR
>
> Introduce a new `plan` command similar to `kubectl create --dry-run -o yaml`
Member

I'm not a fan of a new command group.

Please see my comment here:
kubernetes/kubernetes#30558 (comment)

I am in favor of simplifying the flags and cleaning up the output.

Also, I don't find the term plan to be intuitive. Is that borrowed from Terraform?

Member Author

Yeah it was suggested by @jbeda for that reason. I am not tied to the name.

I am interested to get your feedback on adding support for implementing the create subcommands as sub resources on the server side, and dynamically discovering them and their arguments through the discovery service + openapi extensions.

Member

Re. create: I've thought about that in the past:
kubernetes/kubernetes#5280

I have significant reservations, though. We've added a number of options to these generators. As with template parameterization, there is a slippery slope where we just develop an alternative API schema that's almost as complex as, but different from, the current one.

My instinct is to find another way to do this.

What's your main reason for wanting a server-side implementation?

Member Author

> What's your main reason for wanting a server-side implementation?

There are 2 problems I would like to solve and think dynamically discovered server-side commands would provide a solution:

  1. Tie the resource-specific commands to the resource APIs actually available in the cluster.
  • In a version-skewed environment, as a kubectl user, I would like kubectl to show and support the resource-specific commands/subcommands for the APIs present in the cluster, rather than those compiled into the client.
  • When using extension APIs, as a kubectl user, I would like to be able to view and use resource-specific commands/subcommands of the extension APIs in the cluster without separately installing a compiled plugin (e.g. this simplifies questions such as: what if the extension is in some clusters but not others? Or what if my corporate IT policy doesn't allow me to run arbitrary binaries on my computer?).
  2. Allow end-to-end development of APIs to be performed without changing kubectl code for things like the create resource name.

Currently, adding a new API to Kubernetes results in new resource-specific kubectl commands being added for that resource. The code for these commands lives in the kubectl code base, but the ownership of their maintenance is unclear (is it the maintainer of the API code, or the maintainer of the kubectl codebase?). We frequently discover issues in kubectl API-specific commands late in the release cycle, and resolving these issues generally requires a fire drill from both an API maintainer and a kubectl maintainer.

Member

Version skew, extensibility, and maintainability -- reasonable objectives.

Quick thought: How would users do this for CRDs?

create configmap and secret are somewhat special, since they transform non-API data in support of particular idioms/patterns. We do need an API-like representation for the declarative manifest work, but there's still client-side logic that is required, such as reading and combining multiple files.

run and expose are also somewhat special, since we've added a fairly large number of options to run in support of common use cases, they span multiple resource types, the commands have existed for a long time, and they target Kubernetes's most central user-facing functionality, which has also existed for a fairly long time.

I'd argue that we should put more than typical effort into polished, opinionated/curated experiences for those 4 commands in particular, even if somewhat ad hoc, in support of very common use cases and reducing the learning curve.

Other current commands are more basic, narrow, and straightforward: clusterrole, deployment, namespace, pdb, quota, role, rolebinding, service, serviceaccount. These are more targeted towards generating boilerplate and scaffolding the API schema structure, to reduce work, the learning curve, and errors. These seem more amenable to more scalable approaches. My concerns about growing alternative schemas stand, however.

The alternatives don't appear to have been thoroughly vetted.

Have you seen codesolvent.com, yipee.io, or the vscode-based autocompletion? Maybe we could attach enough information to OpenAPI to build a generic resource attribute discovery wizard, even as a CLI or shell environment.

Or, we could use it to generate a resource with all default values, include fake names for things like images, which the user could then just edit, perhaps using kubectl set(*).

Or, we could hand-write examples with all default values and populate the example data in OpenAPI: https://swagger.io/docs/specification/adding-examples/

I kind of like that last one, actually. It would improve our generated API documentation, also.
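For concreteness, the swagger mechanism linked above attaches example data directly to a schema via an `example` keyword. A heavily trimmed schema fragment might look like this (rendered here as a Python dict for brevity; the field values are made up for illustration):

```python
# Illustrative only: an OpenAPI-style schema carrying example data via the
# `example` keyword (per the swagger docs linked above). Values are invented.
pod_schema = {
    "type": "object",
    "properties": {
        "apiVersion": {"type": "string"},
        "kind": {"type": "string"},
        "metadata": {"type": "object"},
    },
    # Sample data that doc tools (and potentially a client like kubectl)
    # could surface to users as a starting point to edit.
    "example": {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "example-pod"},
    },
}
```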

I also want a generic search/browse mechanism that would allow us to build a catalog of more specific examples (e.g., Deployment for a java app), but that's a separate problem.

(*) Does kubectl set suffer from similar problems, or is it mainly generators that are painful? I do want set to work in the client. I'm hoping that by targeting super-common, stable use cases (e.g., set image) it will be easier to maintain.

Member

Note that another advantage of example text is that it can include comments (assuming yaml).

Member

In terms of how to request an example, this is also sort of related to #123.

Member Author

> but there's still client-side logic that is required, such as reading and combining multiple files.

This can still be done dynamically.

> run and expose are also somewhat special, since we've added a fairly large number of options to run in support of common use cases, they span multiple resource types, the commands have existed for a long time, and they target Kubernetes's most central user-facing functionality, which has also existed for a fairly long time.

I think these commands cover too many use cases and can be a bit of a crutch. Folks have asked for more flags to be added to run for more special cases. I'd rather marshal our efforts around commands to manage config, and provide a few transformations.

@pwittrock
Member Author

I am curious why you decided server-side generators were the way to go.

@bgrant0607
Member

That server-side was or was not the way to go?

I touched on this in my other response, but another issue is that we changed these commands more frequently than our API contract would have allowed.

@pwittrock
Member Author

There are 2 things being discussed here; I can split it up into 2 proposals:

  • A) stop compiling resource-specific generation commands (and in the future nearly all resource specific commands) into kubectl
  • B) improve the UX around our generation commands and the tools we provide users to learn about the system

> Quick thought: How would users do this for CRDs?

I think CRDs will support subresources in the future, so this would work for them. Otherwise see response below.

> create configmap and secret are somewhat special, since they transform non-API data in support of particular idioms/patterns. We do need an API-like representation for the declarative manifest work, but there's still client-side logic that is required, such as reading and combining multiple files.

I had actually been thinking about a technique similar to the one described here, but more powerful and flexible. TL;DR: support annotating Services (kubectl.k8s.io/command) and define a convention that allows those Services to be exposed directly in the CLI as commands. This would be done by mapping their request bodies to command-line flags (protobuf or openapi model). Most fields could be handled trivially by convention (e.g. "string", "string array"), and we could augment edge cases using metadata (protobuf options or openapi extensions). It should be possible to support a few high-level mappings for flags that point to local resources, e.g. we could define a field type FileContents that maps to the flag -f to support generation for secrets.
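As a minimal sketch of the mapping described above (the function name, the `x-kubernetes-flag-type` extension, and the schema are all hypothetical, not part of any real kubectl API; the real client would be Go):

```python
# Hypothetical sketch: derive CLI flag specs from an OpenAPI-style request
# schema. Fields map to flags by convention; edge cases like FileContents
# are augmented via an (invented) OpenAPI extension.

def schema_to_flags(schema):
    """Map each request-body property to a command-line flag spec."""
    flags = []
    for name, prop in schema.get("properties", {}).items():
        if prop.get("x-kubernetes-flag-type") == "FileContents":
            # Edge case flagged via metadata: populate the field from a
            # local file passed with -f.
            flags.append({"flag": "-f", "dest": name, "from_file": True})
        elif prop.get("type") == "array":
            flags.append({"flag": "--" + name, "dest": name, "repeated": True})
        else:
            flags.append({"flag": "--" + name, "dest": name, "repeated": False})
    return flags

# Invented request schema for a secret-generation-like command.
example_schema = {
    "properties": {
        "literal": {"type": "array", "items": {"type": "string"}},
        "data": {"type": "string", "x-kubernetes-flag-type": "FileContents"},
    }
}
```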

> I'd argue that we should put more than typical effort into polished, opinionated/curated experiences for those 4 commands in particular, even if somewhat ad hoc, in support of very common use cases and reducing the learning curve.

While I agree with this, it doesn't address the motivations of this proposal: version skew, extensibility, and maintainability. FWIW, @alexandercampbell tried to push for improvements in this area, but got stuck trying to get consensus on what the UX should be.

> Have you seen codesolvent.com, yipee.io, or the vscode-based autocompletion? Maybe we could attach enough information to OpenAPI to build a generic resource attribute discovery wizard, even as a CLI or shell environment.
>
> Or, we could use it to generate a resource with all default values, include fake names for things like images, which the user could then just edit, perhaps using kubectl set(*).

I'd consider these for improving B). How does this relate to A)? Are you suggesting them in addition to the generation commands we have, or are you suggesting deprecating and removing the generation commands?

> Or, we could hand-write examples with all default values and populate the example data in OpenAPI: https://swagger.io/docs/specification/adding-examples/
>
> I kind of like that last one, actually. It would improve our generated API documentation, also.

We already have this capability in our API documentation, and have examples written for many types. Putting this into the OpenAPI would make it possible to read from additional sources such as kubectl. Getting help prioritizing the writing of the example documentation has been a challenge.

> I also want a generic search/browse mechanism that would allow us to build a catalog of more specific examples (e.g., Deployment for a java app), but that's a separate problem.

Not certain if this is what you are asking, but we do have the Stripe-docs-like capability to provide language-specific examples for working with resources in our reference documentation. More examples are always good, but they do create a sustaining engineering burden.

> (*) Does kubectl set suffer from similar problems, or is it mainly generators that are painful? I do want set to work in the client. I'm hoping that by targeting super-common, stable use cases (e.g., set image) it will be easier to maintain.

Set could be solved in a number of ways, some of which are simpler. E.g. set for well-known types such as "image" or "scale" could be solved by compiling the notion of "image" or "scale" into kubectl set and using duck-typing with the openapi object model to identify the fields to update. This could be augmented with openapi extensions to handle edge cases (e.g. if a Resource has multiple "replicas" fields, which one needs to be set).

Resource-specific sub-commands (e.g. something like set cassandra-version) would not fit into this pattern, but could be addressed using the "service as a command" technique above.

I don't think a subresource would work well for set, because set can be run against local files that may not exist in the cluster yet.
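The duck-typed `set` idea above can be sketched roughly as follows (a minimal illustration; `set_field` is a hypothetical helper, and the real kubectl would consult the openapi model rather than walk raw dicts, and is written in Go):

```python
# Illustrative sketch of duck-typed `kubectl set image`: walk a resource
# object and update any field named "image", guided only by the field name
# rather than compiled-in knowledge of each resource type.

def set_field(obj, field, value):
    """Recursively set every occurrence of `field`; return how many were set."""
    count = 0
    if isinstance(obj, dict):
        for key, val in obj.items():
            if key == field:
                obj[key] = value
                count += 1
            else:
                count += set_field(val, field, value)
    elif isinstance(obj, list):
        for item in obj:
            count += set_field(item, field, value)
    return count

# A trimmed Deployment-shaped object to update.
deployment = {
    "spec": {
        "template": {
            "spec": {"containers": [{"name": "app", "image": "nginx:1.7"}]}
        }
    }
}
```

A resource with ambiguous duplicate field names is exactly the edge case where an openapi extension would be needed to disambiguate, as noted above.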

@pwittrock
Member Author

> I touched on this in my other response, but another issue is that we changed these commands more frequently than our API contract would have allowed.

We could probably get around this with the "Service as a cli command" approach.

> The subresource will expose the API as a POST operation.
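For context, Kubernetes subresources (e.g. `scale`, `status`) hang off the parent resource's URL path; a `plan` subresource would presumably follow the same convention. A sketch of the path construction (the `plan` subresource itself is hypothetical; the helper name is invented):

```python
# Sketch: build the request path for a subresource, following the standard
# Kubernetes URL convention. A hypothetical `plan` subresource would be
# POSTed to just like `scale` is today.

def subresource_url(group, version, namespace, resource, name, subresource):
    """Return the API server path for a namespaced subresource."""
    if group:
        prefix = "/apis/%s/%s" % (group, version)
    else:
        prefix = "/api/%s" % version  # core ("legacy") group has no group segment
    return "%s/namespaces/%s/%s/%s/%s" % (
        prefix, namespace, resource, name, subresource
    )
```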


why POST? This command doesn't actually create anything. Also, what about the endpoints of this new subresource? Do we need new apiserver endpoints? That would mean some server-side implementation is also needed to support this new command.

Contributor

POST means "do something" in HTTP. It doesn't mean create.

@fabiand

fabiand commented Oct 24, 2017

For KubeVirt such a generic plan endpoint could be useful in a few cases, i.e.:

  • Starting a VM - Today this is done on the client side by converting one kind to another
  • Migrating a VM - Today this is done by creating a migration object

But this approach and our work are heavy WIP, so it's just a comment built on moving components.

@pwittrock pwittrock closed this Oct 26, 2017
@fabiand

fabiand commented Oct 27, 2017

@pwittrock could you provide some context for the closure of this bug?

@pwittrock pwittrock reopened this Nov 1, 2017
@pwittrock
Member Author

Reopening so folks can continue to discuss.

> we end up doing for `kubectl get`.
>
> `plan` and `plan-list` will use the discovery API to identify all resources with a `plan`
> subcommand. It will lookup the request Model from the openapi, and check that
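The discovery step quoted above might look roughly like this (the discovery payload shape mirrors a real APIResourceList; the `plan` subresource entries and the helper name are hypothetical):

```python
# Sketch: use a discovery APIResourceList to find resources that expose a
# `plan` subresource. Subresources appear in discovery as "<name>/<sub>"
# entries (as "deployments/scale" does today); "deployments/plan" is invented.

def resources_with_subresource(api_resource_list, sub):
    """Return parent resource names that have a `<name>/<sub>` entry."""
    names = {r["name"] for r in api_resource_list["resources"]}
    return sorted(
        name.split("/")[0] for name in names if name.endswith("/" + sub)
    )

# Invented discovery response for one group/version.
discovery = {
    "groupVersion": "apps/v1",
    "resources": [
        {"name": "deployments"},
        {"name": "deployments/plan"},
        {"name": "statefulsets"},
    ],
}
```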
Contributor

How will this command deal with versioning of openapi? What requirements will it place on the openapi schema?

> of introducing a `plan-list` command. We should make this consistent with whatever
> we end up doing for `kubectl get`.
>
> `plan` and `plan-list` will use the discovery API to identify all resources with a `plan`
Contributor

@smarterclayton smarterclayton Nov 5, 2017

Re "list" to get a set of versions, kubernetes/kubernetes#53387 captures some of my thoughts on how we need to improve listing supported resources.

I'm inclined to do it via get and describe, special casing if necessary, for maximum user comprehension.

```
$ kubectl get apigroups
$ kubectl get apigroup/extensions
$ kubectl get apiversions extensions/v1beta1 # potentially special case this so that you can look up apiversions like a real user
$ kubectl get apiresources
$ kubectl get apiresources statefulsets.v1.apps
$ kubectl describe apiresources/statefulsets.v1.apps
$ kubectl explain --api-version=apps/v1 statefulsets
```

We can reserve those names (we kind of already have) and if a third party declares their own they can still use subsetting.

Member Author

Like the direction. Started to read through the issue.

nit: I don't love that the arguments and flags differ between the various commands. The lack of consistency makes it hard to remember the format for each command, e.g. `--api-version apps/v1 statefulsets` (explain) vs `apps/v1` (apiversions) vs `statefulsets.v1.apps` (apiresources).

Happy to talk through more once the concept is agreed upon.

Member Author

How will the user know what is supported for each API? E.g. is scale supported for StatefulSet? How would the user know what supports plan (design)?

@pwittrock
Member Author

@bgrant0607 @smarterclayton

I added a section to the top that breaks this proposal into 3 separate components that collectively define the proposal.

  • Expose flattened / simple representations on the server side instead of in the client
  • Dynamically build client commands from discovery / openapi data (instead of hard coding it)
  • Expose kubectl command to design config from flags instead of creating objects from flags

@bgrant0607 my interpretation of your comments was that you were lukewarm on all of these. Is that correct?

@bgrant0607 bgrant0607 assigned bgrant0607 and unassigned dchen1107 and wojtek-t Nov 9, 2017
@k8s-github-robot

This PR hasn't been active in 30 days. It will be closed in 59 days (Feb 4, 2018).

cc @bgrant0607 @pwittrock

You can add 'keep-open' label to prevent this from happening, or add a comment to keep it open another 90 days

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 7, 2018
@fabiand

fabiand commented Mar 9, 2018

One of the immediate things we could solve with this is to provide an action for the equivalent of `kubectl patch offlinevirtualmachine testvm --type merge -p '{"spec":{"running":true}}'`

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 27, 2018
@soltysh
Contributor

soltysh commented Mar 27, 2018

/remove-lifecycle stale
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 27, 2018
@pwittrock
Member Author

@soltysh and I came up with some good ideas around this at KubeCon. Will close this and reopen something different.

@pwittrock pwittrock closed this May 7, 2018
@fabiand

fabiand commented May 8, 2018

That's cool. @pwittrock could you please CC me on the new issue you open?
