
ReadMany PVCs #4579

Closed
ryandawsonuk opened this issue Jun 30, 2019 · 17 comments
Labels
kind/question Further information is requested
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ryandawsonuk

Whether to support PVCs has been discussed before, and there have been a couple of comments on closed threads asking what the conclusion was - #2025 (comment) and #2260 (comment). (There are also projects making decisions about using Knative Serving that relate to this question: #4307.) My impression is that the position is that ReadMany volumes would be acceptable, but only ReadMany, because of concerns about workloads under the same kservice seeing inconsistent versions of mutable data. So far no mechanism has been identified to limit volumes to ReadMany, so for now all PVCs remain out of scope. Am I summarising this fairly accurately? I don’t mean to reopen the debate unnecessarily, just to get clarity on the current position.
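For reference, a claim limited to the ReadMany mode being discussed looks roughly like this in plain Kubernetes. A minimal sketch; the name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: model-store        # illustrative name
spec:
  accessModes:
    - ReadOnlyMany         # volume may be attached read-only by many nodes
  resources:
    requests:
      storage: 1Gi
```

Note that accessModes governs how the volume can be attached across nodes; per-container read-only enforcement ultimately happens at mount time, which is part of why limiting workloads to ReadMany is not a one-line change.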

@ryandawsonuk ryandawsonuk added the kind/question Further information is requested label Jun 30, 2019
@mattmoor
Member

Yes, I think that's accurate, although we've recently begun to wonder whether we should allow arbitrary volumes but limit all mounts to read-only. There are still ways users can shoot themselves in the foot with this, but at least they cannot write state out to volumes.

cc @dgerd
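To make the read-only idea concrete, here is a sketch of what it looks like at the plain-Kubernetes pod level, where readOnly can be set both on the mount and on the claim reference. Placeholder names and image; this is not a Knative API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reader                       # placeholder name
spec:
  containers:
    - name: app
      image: ghcr.io/example/app     # placeholder image
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true             # the container cannot write state out
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: model-store       # claim from the sketch above
        readOnly: true               # attach the volume read-only as well
```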

@mattmoor mattmoor added this to the Ice Box milestone Jun 30, 2019
@duglin

duglin commented Jul 1, 2019 via email

@dgerd

dgerd commented Jul 1, 2019

#4417 is related to this.

Currently there’s not been any mechanism identified to limit Volumes to ReadMany so for now all PVCs remain out of scope. Am I summarising this fairly accurately?

This has been a technical challenge in enabling support for additional volume types, and one of the primary reasons it is currently out of scope. Some other reasons that Matt mentioned on an earlier thread:

  1. Typically impacts cold start latency -- Is this something that users realize? What is the exact impact? Do users expect the same autoscaling performance with and without volumes? Do we have different behavior for autoscaling Services with PVCs?
  2. HostPath volumes are an Operator's nightmare -- Should PVC be supported on all installs of Knative? Due to the complexity of managing them and the security risks involved, they may not be offered by all vendors or installations. Not having it everywhere is bad for Knative portability. How do we mitigate this?

I would add that it also increases Knative API and runtime surface area. Adding this isn't just passing the parameters down to the K8s deployment; it also means we need to add additional webhook validation, unit tests, e2e tests, samples, specification, user documentation, and ongoing feature maintenance. The way we expose it to users today will limit our ability to expose it differently in the future. We should make sure we are happy with the operations, performance, security, and scalability.

All that said, I am happy to start having the discussions on how we can make progress here to meet your use-case and how we can mitigate or work through the questions and concerns stated above.

@duglin

duglin commented Jul 9, 2019

  1. If it impacts latency, and they care, then they won't use volumes - or they'll deal with it. People aren't stupid - we need to let them make the tough decisions for their needs. We're not in a position to make it for everyone. If an offering that sits on top of Kn wants to block it, that's their choice - but as a reusable component Kn itself should be fairly flexible. One of the appealing aspects of Knative to many people is the re-use of Kube under the covers. If we remove many of the popular features of Kube from Kn users then what's the point of using Kube? And what's the advantage of Kn over any other PaaS/FaaS/Serverless/whatever?
  2. How is this different from Volumes on Kube itself? I don't see why we would need to be more (or less) concerned than Kube is about these aspects. Again, let the user decide.

re: "it's not just a pass thru" - that's true about everything - but at least in this case we're not inventing something from scratch (like the entire eventing infrastructure). We're using a pretty well tested base component.

@ryandawsonuk
Author

It’s worth being aware that the KFserving project referenced above works on top of Knative and uses mutating webhooks to modify resources created by Knative. That’s being done to work around the restrictions. I don’t know whether Knative would regard this as a supported way of using the tools.
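For context, the workaround pattern being described is roughly the following: register a mutating admission webhook that intercepts the Deployments Knative creates and patches volumes back into their pod specs. An illustrative sketch, not KFserving's actual manifest; the webhook name and service reference are hypothetical:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: volume-injector                 # hypothetical name
webhooks:
  - name: volume-injector.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: volume-injector           # hypothetical service that patches pod specs
        namespace: default
        path: /mutate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]      # intercept the Deployments Knative creates
```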

@duglin

duglin commented Jul 10, 2019

Took me a second to find the KFserving reference you're talking about :-) so for others, I think this is it: kserve/kserve#129

IMO you shouldn't have to jump through those kinds of hoops just to get access to core Kube features.

@philwinder

I would like to add my support to this and the other issues. Preventing people from using core k8s functionality is counter-intuitive and limiting. K8s allows scaling whilst using volumes; so should Knative. Please reconsider the "no volumes" stance.

In my use case, I want to use an AzureFile PVC in ReadWriteMany mode, so that all pods are reading and writing from the same share. I accept the consistency issues, because consistency isn't a requirement for our particular problem. Ideally I would like to use a PVC for this, but worst case I should be able to use a Volume that doesn't force the use of readOnly.

As a workaround, the application code has had to implement custom read/write methods using the Azure libraries. I feel this is a complete waste of time as it is already solved by k8s.
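For illustration, the claim described above would look something like this; the storage class is the one AKS typically provides, and the name and size are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data           # assumed name
spec:
  accessModes:
    - ReadWriteMany           # AzureFile supports concurrent readers and writers
  storageClassName: azurefile # assumed; AKS commonly ships this class
  resources:
    requests:
      storage: 5Gi
```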

@knative-housekeeping-robot

Issues go stale after 90 days of inactivity.
Mark the issue as fresh by adding the comment /remove-lifecycle stale.
Stale issues rot after an additional 30 days of inactivity and eventually close.
If this issue is safe to close now please do so by adding the comment /close.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/lifecycle stale

@knative-prow-robot knative-prow-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 27, 2019
@knative-housekeeping-robot

Stale issues rot after 30 days of inactivity.
Mark the issue as fresh by adding the comment /remove-lifecycle rotten.
Rotten issues close after an additional 30 days of inactivity.
If this issue is safe to close now please do so by adding the comment /close.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/lifecycle rotten

@knative-prow-robot knative-prow-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 26, 2020
@knative-housekeeping-robot

Rotten issues close after 30 days of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh by adding the comment /remove-lifecycle rotten.

Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.

/close

@knative-prow-robot
Contributor

@knative-housekeeping-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pradeep-mishra

any update on this?

@plsmaop

plsmaop commented Mar 7, 2021

any update on this?

@skonto
Contributor

skonto commented Sep 7, 2021

@plsmaop I am working on the feature track doc, so I am on this. A similar question was asked recently: #11742.

@skonto
Contributor

skonto commented Sep 9, 2021

The feature track doc is here; anyone interested, please review.

@dprotaso dprotaso removed this from the Ice Box milestone Oct 6, 2021
@marcjimz

Is this being tracked anywhere? Not sure why this isn’t supported, as the issues with using PVCs are fairly obvious. Hoping this can be a choice made available soon.

@skonto
Contributor

skonto commented Jan 17, 2022

@marcjimz hi, see #12438.
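For anyone landing here later: per #12438, PVC support eventually shipped behind feature flags in Knative Serving's config-features ConfigMap. A sketch, assuming a recent release; the flag names follow the Knative feature-flag docs, while the Service name, image, and claim are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-persistent-volume-claim: "enabled"
  kubernetes.podspec-persistent-volume-write: "enabled"   # omit to keep mounts read-only
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: pvc-demo                         # placeholder name
spec:
  template:
    spec:
      containers:
        - image: ghcr.io/example/app     # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data       # e.g. the claim sketched earlier
```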
