kubectl plan proposal #1142
Conversation
cc @apelisse
cc @liggitt
cc @kubernetes/sig-cli-proposals
## TL;DR

Introduce a new `plan` command similar to `kubectl create --dry-run -o yaml`.
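As a rough sketch of how this might look next to today's flag-based equivalent (the `plan` invocation is hypothetical syntax, not something settled by this proposal):

```
# Today: render a manifest without creating anything on the server
$ kubectl create deployment nginx --image=nginx --dry-run -o yaml

# Proposed (hypothetical syntax): a dedicated command for the same workflow
$ kubectl plan deployment nginx --image=nginx
```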
I'm not a fan of a new command group.
Please see my comment here:
kubernetes/kubernetes#30558 (comment)
I am in favor of simplifying the flags and cleaning up the output.
Also, I don't find the term `plan` to be intuitive. Is that borrowed from Terraform?
Yeah it was suggested by @jbeda for that reason. I am not tied to the name.
I am interested in getting your feedback on adding support for implementing the `create` subcommands as subresources on the server side, and dynamically discovering them and their arguments through the discovery service + openapi extensions.
Re. create: I've thought about that in the past:
kubernetes/kubernetes#5280
I have significant reservations, though. We've added a number of options to these generators. As with template parameterization, there is a slippery slope where we just develop an alternative API schema that's almost as complex as, but different from, the current one.
My instinct is to find another way to do this.
What's your main reason for wanting a server-side implementation?
> What's your main reason for wanting a server-side implementation?
There are two problems I would like to solve, and I think dynamically discovered server-side commands would provide a solution:
- Tie the resource-specific commands to the resource APIs actually available in the cluster.
  - In a version-skewed environment, as a kubectl user, I would like kubectl to show and support the resource-specific commands/subcommands for the APIs present in the cluster, rather than those compiled into the client.
  - When using extension APIs, as a kubectl user, I would like to be able to view and use resource-specific commands/subcommands of the extension APIs in the cluster without separately installing a compiled plugin (e.g. this simplifies questions such as: what if the extension is in some cluster but not others? or what if my corporate IT policy doesn't allow me to run arbitrary binaries on my computer?).
- Allow end-to-end development of APIs to be performed without changing kubectl code for things like the `create <resource name>` commands.

Currently, adding a new API to Kubernetes results in new resource-specific kubectl commands being added for that resource. The code for these commands lives in the kubectl code base, but the ownership of their maintenance is unclear (is it the maintainer of the API code, or the maintainer of the kubectl codebase?). We frequently discover issues in kubectl API-specific commands late in the release cycle, and resolving these issues generally requires a fire drill from both an API maintainer and a kubectl maintainer.
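As a sketch of what the dynamic discovery could build on: subresources already appear in the discovery document today (e.g. `deployments/scale`), so a hypothetical `plan` subresource would surface the same way. Assuming access to a cluster and `jq`:

```
# Subresources show up as "<resource>/<subresource>" entries in the
# APIResourceList returned by discovery; a server-side "plan" would too.
$ kubectl get --raw /apis/apps/v1 | jq -r '.resources[].name'
deployments
deployments/scale
deployments/status
...
```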
Version skew, extensibility, and maintainability -- reasonable objectives.
Quick thought: How would users do this for CRDs?
`create configmap` and `create secret` are somewhat special, since they transform non-API data in support of particular idioms/patterns. We do need an API-like representation for the declarative manifest work, but there's still client-side logic that is required, such as reading and combining multiple files.
`run` and `expose` are also somewhat special, since we've added a fairly large number of options to `run` in support of common use cases, they span multiple resource types, the commands have existed for a long time, and they target Kubernetes's most central user-facing functionality, which has also existed for a fairly long time.
I'd argue that we should put more than typical effort into polished, opinionated/curated experiences for those 4 commands in particular, even if somewhat ad hoc, in support of very common use cases and reducing the learning curve.
The other current commands are more basic, narrow, and straightforward: `clusterrole`, `deployment`, `namespace`, `pdb`, `quota`, `role`, `rolebinding`, `service`, `serviceaccount`. These are targeted more towards generating boilerplate and scaffolding the API schema structure, to reduce work, the learning curve, and errors. They seem more amenable to more scalable approaches. My concerns about growing alternative schemas stand, however.
The alternatives don't appear to have been thoroughly vetted.
Have you seen codesolvent.com, yipee.io, or the vscode-based autocompletion? Maybe we could attach enough information to OpenAPI to build a generic resource attribute discovery wizard, even as a CLI or shell environment.
Or, we could use it to generate a resource with all default values, including fake names for things like images, which the user could then just edit, perhaps using `kubectl set`(*).
Or, we could hand-write examples with all default values and populate the example data in OpenAPI: https://swagger.io/docs/specification/adding-examples/
I kind of like that last one, actually. It would improve our generated API documentation, also.
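A hedged sketch of that last option, following the Swagger docs linked above (the model name and values here are illustrative, not an existing part of the Kubernetes OpenAPI):

```
# Illustrative OpenAPI fragment: hand-written example data attached to a model.
definitions:
  io.k8s.api.core.v1.ConfigMap:
    type: object
    properties:
      data:
        type: object
        additionalProperties:
          type: string
    example:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config   # fake name for the user to edit
      data:
        game.properties: "enemies=aliens"
```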
I also want a generic search/browse mechanism that would allow us to build a catalog of more specific examples (e.g., Deployment for a java app), but that's a separate problem.
(*) Does `kubectl set` suffer from similar problems, or is it mainly generators that are painful? I do want `set` to work in the client. I'm hoping that by targeting super-common, stable use cases (e.g., `set image`) it will be easier to maintain.
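For reference, the super-common case mentioned above already works like this today:

```
# Update the image of one container in a deployment, entirely client-driven
$ kubectl set image deployment/frontend www=registry.example.com/www:v2
```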
Note that another advantage of example text is that it can include comments (assuming YAML).
In terms of how to request an example, this is also sort of related to #123.
> but there's still client-side logic that is required, such as reading and combining multiple files.

This can still be done dynamically.
> `run` and `expose` are also somewhat special, since we've added a fairly large number of options to `run` in support of common use cases, they span multiple resource types, the commands have existed for a long time, and they target Kubernetes's most central user-facing functionality, which has also existed for a fairly long time.

I think these commands cover too many use cases and can be a bit of a crutch. Folks have asked for more flags to be added to `run` for more special cases. I'd rather marshal our efforts around commands to manage config, and provide a few transformations.
> I am curious why you decided server-side generators were the way to go.

That server-side was, or was not, the way to go? I touched on this in my other response, but another issue is that we changed these commands more frequently than our API contract would have allowed.
There are two things being discussed here; I can split it up into two proposals:
I think CRDs will support subresources in the future, so this would work for them. Otherwise see response below.
I had actually been thinking about a technique similar to the one described here, but more powerful and flexible. TL;DR: support annotating Services (`kubectl.k8s.io/command`) and defining a convention that allows the Services to be exposed directly in the CLI as commands. This would be done by mapping their request bodies to command-line flags (protobuf or openapi model). Most fields could be handled trivially by convention (e.g. "string", "string array"), and we could augment edge cases using metadata (protobuf options or openapi extensions). It should be possible to support a few high-level mapping flags that point to local resources - e.g. we could define a field type […]
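A rough sketch of that convention (the annotation key is taken from the comment above; the Service itself and the flag-mapping notes are hypothetical):

```
# Hypothetical "Service as a CLI command" convention.
apiVersion: v1
kind: Service
metadata:
  name: rollout-planner
  annotations:
    # Expose this Service in the CLI, e.g. as `kubectl rollout-plan`.
    kubectl.k8s.io/command: rollout-plan
spec:
  selector:
    app: rollout-planner
  ports:
  - port: 443
# Request-body fields ("string", "string array", ...) would be mapped to
# command-line flags by convention, with edge cases handled via protobuf
# options or openapi extensions.
```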
While I agree with this, it doesn't address the motivation of this proposal.
I'd consider these for improving B). How does this relate to A)? Are you suggesting them in addition to the generation commands we have, or are you suggesting deprecating and removing the generation commands?
We already have this capability in our API documentation, and have examples for many types already defined. Putting this into the OpenAPI would make it possible to read it from additional sources such as kubectl. Getting help prioritizing writing the example documentation has been a challenge.
Not certain if this is what you are asking, but we do have the Stripe-docs-like capability to provide language-specific examples for working with resources in our reference documentation. More examples are always good, but they do create a sustained engineering burden.
`set` could be solved a number of ways, some of which are simpler. E.g. `set` for well-known types such as "image" or "scale" could be solved by compiling the notion of "image" or "scale" into resource-specific sub-commands (e.g. something like […])
I don't think a subresource would work well for […]
We could probably get around this with the "Service as a CLI command" approach.
The subresource will expose the API as a POST operation.
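Assuming hypothetical resource and subresource paths, the invocation would be a plain POST, analogous to how the `pods/eviction` subresource is called today:

```
# Hypothetical: POST a plan request to a Deployment's plan subresource.
$ curl -X POST -H "Content-Type: application/json" \
    --data @plan-request.json \
    "https://$APISERVER/apis/apps/v1/namespaces/default/deployments/nginx/plan"
```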
Why POST? This command doesn't actually create anything. Also, what about the endpoints of this new subresource? Do we need new apiserver endpoints? That would mean it also needs some server-side implementation to support this new command.
POST means "do something" in HTTP. It doesn't mean create.
For KubeVirt, such a generic […]
But this approach and our work are heavily WIP, so this is just a comment built on moving components.
@pwittrock could you provide some context for the closure of this bug?
Reopening so folks can continue to discuss.
> `plan` and `plan-list` will use the discovery API to identify all resources with a `plan` subcommand. It will look up the request Model from the openapi, and check that […]
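A sketch of that lookup flow, assuming a hypothetical `plan` subresource and a made-up model name:

```
# 1. Discovery: find resources that expose a plan subresource.
$ kubectl get --raw /apis/apps/v1 \
    | jq -r '.resources[].name | select(endswith("/plan"))'

# 2. OpenAPI: look up the request model for that subresource
#    (the definition name below is invented for illustration).
$ kubectl get --raw /openapi/v2 \
    | jq '.definitions."io.k8s.api.apps.v1.DeploymentPlanRequest"'
```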
How will this command deal with versioning of openapi? What requirements will it place on the openapi schema?
> …of introducing a `plan-list` command. We should make this consistent with whatever we end up doing for `kubectl get`.
> `plan` and `plan-list` will use the discovery API to identify all resources with a `plan` subcommand.
Re "list" to get a set of versions, kubernetes/kubernetes#53387 captures some of my thoughts on how we need to improve listing supported resources.
I'm inclined to do it via `get` and `describe`, special-casing if necessary, for maximum user comprehension.
```
$ kubectl get apigroups
$ kubectl get apigroup/extensions
$ kubectl get apiversions extensions/v1beta1  # potentially special-case this so that you can look up apiversions like a real user
$ kubectl get apiresources
$ kubectl get apiresources statefulsets.v1.apps
$ kubectl describe apiresources/statefulsets.v1.apps
$ kubectl explain --api-version=apps/v1 statefulsets
```
We can reserve those names (we kind of already have) and if a third party declares their own they can still use subsetting.
Like the direction. Started to read through the issue.
nit: I don't love that the arguments and flags differ between the various commands. The lack of consistency makes it hard to remember the format for each command, e.g. `--api-version apps/v1 statefulsets` (explain) vs `apps/v1` (apiversions) vs `statefulsets.v1.apps`.
Happy to talk through more once the concept is agreed upon.
How will the user know what is supported for each API? E.g. is `scale` supported for `statefulset`? How would the user know what supports `plan` (`design`)?
I added a section to the top that breaks this proposal into 3 separate components that collectively define the proposal.
@bgrant0607 my interpretation of your comments was that you were lukewarm on all of these. Is that correct?
This PR hasn't been active in 30 days. It will be closed in 59 days (Feb 4, 2018). You can add the 'keep-open' label to prevent this from happening, or add a comment to keep it open for another 90 days.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
One of the immediate things we could solve with this is to provide an action for the equivalent of […]
/remove-lifecycle stale
@soltysh and I came up with some good ideas around this at KubeCon. Will close this and reopen something different.
That's cool. @pwittrock could you please CC me on the new issue you open?