ReadMany PVCs #4579
Comments
Yes, I think that's accurate, although we've recently begun to wonder whether we should allow arbitrary volumes, but limit all mounts to read-only. There are still ways users can shoot themselves in the foot with this, but they cannot write out state to volumes. cc @dgerd
I don’t see why we would impose additional limitations beyond what kube does. If someone can run their app in kube with write volume support, and scale it via deployments, I don’t see any reason to block that same app from running under Knative.
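To make that comparison concrete, here is a minimal sketch of what already works in plain Kubernetes today: a Deployment with several replicas all mounting the same PVC. All names, the image, and the storage sizes are placeholders, not anything from this thread.

```yaml
# Illustrative only: a plain Kubernetes Deployment mounting a shared PVC,
# which is the behaviour being asked for under Knative. Names are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # requires a storage backend that supports RWX
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3                  # scales while every replica mounts the same volume
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example.com/app:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data
```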
#4417 is related to this.
This has been a technical challenge to enabling support for additional volume types, and one of the primary reasons it is currently out of scope. Some other reasons Matt mentioned on an earlier thread:
I would add that it also increases Knative API and runtime surface area. Adding this isn't just passing the parameters down to the K8s deployment; it also means we need to add additional webhook validation, unit tests, e2e tests, samples, specification, user documentation, and ongoing feature maintenance. The way we expose it to users today will limit our ability to expose it differently in the future. We should make sure we are happy with the operations, performance, security, and scalability. All that said, I am happy to start having the discussions on how we can make progress here to meet your use case and how we can mitigate or work through the questions and concerns stated above.
re: "it's not just a pass thru" - that's true about everything - but at least in this case we're not inventing something from scratch (like the entire eventing infrastructure). We're using a pretty well tested base component. |
It’s worth being aware that the KFserving project referenced above is working on top of knative and is using mutating webhooks to modify resources created by knative. That’s being done to work around the restrictions. I don’t know whether knative would regard this as a supported way of using the tools.
Took me a second to find the KFserving reference you're talking about :-) so for others, I think this is it: kserve/kserve#129 IMO you shouldn't have to jump through those kinds of hoops just to get access to core Kube features.
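For readers unfamiliar with that workaround, the rough shape is a mutating admission webhook that intercepts the Pods Knative creates and patches volumes into them. Below is a minimal sketch of the registration side only; the webhook name, service, namespace, and label selector are hypothetical, not KFserving's actual configuration.

```yaml
# Hypothetical sketch of registering a mutating webhook that targets Pods carrying
# Knative's revision label, so a separate webhook service can patch PVC volumes in.
# All names are placeholders; the CA bundle / TLS setup is omitted for brevity.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: inject-volumes
webhooks:
  - name: inject-volumes.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    objectSelector:
      matchExpressions:
        - key: serving.knative.dev/revision   # only Pods created by Knative revisions
          operator: Exists
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    clientConfig:
      service:
        name: volume-injector        # placeholder webhook service
        namespace: default
        path: /mutate
```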
I would like to add my support to this and the other issues. Preventing people from using core k8s functionality is counter-intuitive and limiting. K8s allows scaling whilst using volumes; so should Knative. Please reconsider the "no volumes" stance. In my use case, I want to mount an Azure-backed volume. As a workaround, the application code has had to implement custom read/write methods using the Azure libraries. I feel this is a complete waste of time as it is already solved by k8s.
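To make that use case concrete, this is roughly what the desired setup looks like in plain Kubernetes: an Azure Files-backed claim that pods mount directly instead of going through SDK calls. The storage class name and sizes are assumptions about the cluster, not details from the comment above.

```yaml
# Illustrative only: an Azure Files-backed PVC that Kubernetes can already mount
# into multiple pods, which the application currently replaces with SDK calls.
# The storage class name is an assumption about the cluster setup.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reports
spec:
  storageClassName: azurefile   # assumed AKS built-in class; adjust for your cluster
  accessModes:
    - ReadWriteMany             # Azure Files also supports ReadOnlyMany
  resources:
    requests:
      storage: 5Gi
```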
Issues go stale after 90 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle stale
Stale issues rot after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /lifecycle rotten
Rotten issues close after 30 days of inactivity. Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra. /close
@knative-housekeeping-robot: Closing this issue in response to the /close command above.
Any update on this?
The feature track doc is here; anyone interested, please review.
Is this being tracked anywhere? Not sure why this isn’t supported, as the issues with using PVCs are fairly obvious. Hoping this can be a choice made available soon.
Whether to support PVCs has been discussed before, and there have been a couple of comments on closed threads asking what the conclusion was - #2025 (comment) and #2260 (comment). (There are also projects making decisions about using knative serving that relate to this question: #4307.)

My impression is that the position is that ReadMany volumes would be OK, but only ReadMany, because of concerns about workloads under the same kservice getting inconsistent versions of mutable data. Currently no mechanism has been identified to limit volumes to ReadMany, so for now all PVCs remain out of scope. Am I summarising this fairly accurately? I don’t mean to reopen the debate unnecessarily, just to get clarity on the current position.
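For anyone skimming, "ReadMany" here presumably refers to Kubernetes' ReadOnlyMany access mode. A minimal sketch of what that restriction amounts to in plain Kubernetes terms follows; whether and how Knative would expose it is exactly what this issue asks, and all names below are placeholders.

```yaml
# Sketch of the "ReadMany only" shape discussed above: a ReadOnlyMany claim plus a
# mount marked read-only, so many pods can read but none can write. Placeholders only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-store
spec:
  accessModes:
    - ReadOnlyMany            # many pods may mount, none may write
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
      volumeMounts:
        - name: models
          mountPath: /models
          readOnly: true
  volumes:
    - name: models
      persistentVolumeClaim:
        claimName: model-store
        readOnly: true
```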