alpha vs. beta annotations for PVCs #116
I am using helm to install the chart stable/mariadb. The deploy initialized successfully but never finishes; the pod status is stuck at "Init:0/1".
Helm version: 2.0.0-alpha.5
Kubernetes version: 1.3.6
Steps and output:
Comments
Hey @ligc, looking at the events from the pod, it looks like the PersistentVolumeClaim (PVC) isn't being bound to a PersistentVolume (PV). You can check the state of your PVC with `kubectl get pvc`. By default, these charts request a Persistent Volume through a PersistentVolumeClaim. If you want, you can disable this by overriding `persistence.enabled` in a custom values.yaml.
Does this point you in the right direction?
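For reference, disabling persistence comes down to a one-line values override; a minimal sketch, assuming the chart exposes the `persistence.enabled` flag referenced in the following comment:

```yaml
# custom-values.yaml -- minimal override for stable/mariadb (sketch)
persistence:
  enabled: false   # skip the PersistentVolumeClaim; data will not survive pod rescheduling
```

This could then be applied with something like `helm install stable/mariadb -f custom-values.yaml`, or via `--set persistence.enabled=false` on the command line.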
Hi @prydonius, thanks for the suggestion. I was able to deploy stable/mariadb with "persistence.enabled=false". We only have NFS-based persistent volumes available in our Kubernetes cluster. I created one NFS persistent volume, but stable/mariadb could not bind to it; from searching the internet, it seems that Kubernetes does not support dynamic provisioning of persistent volumes with an NFS backend. Is there any way to configure stable/mariadb to use the NFS-based persistent volume? I read through the files in stable/mariadb/templates but did not find useful hints.
@ligc the PVC should be able to bind to your NFS PV. I think the reason it might not be is that, by default, MariaDB looks for PVs with the volume.alpha.kubernetes.io/storage-class annotation.
If you add this annotation to your PV, does it work? Alternatively, if you install using a values file that sets the storage class to an empty string, does it work?
According to the documentation, an empty storage class will look for PVs without the annotation.
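For illustration, an NFS-backed PV carrying that annotation might look like the following (the server, path, size and the "generic" class value are assumptions for the sketch, not taken from @ligc's cluster):

```yaml
# Sketch of an NFS PersistentVolume annotated so the chart's PVC can select it
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  annotations:
    volume.alpha.kubernetes.io/storage-class: "generic"   # must match the class the PVC requests
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10          # placeholder NFS server
    path: /exports/mariadb     # placeholder export path
```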
Hi @prydonius, thanks, the storage class is the right direction, but the two options you mentioned above do not seem to work. What I figured out is that the annotation key should be "volume.beta.kubernetes.io/storage-class" instead of "volume.alpha.kubernetes.io/storage-class"; after I changed it in stable/mariadb/templates/pvc.yaml, everything works perfectly. I am not exactly sure why the alpha version is not working: http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html does indicate that the alpha version is still supported in Kubernetes 1.4, and I am using Kubernetes 1.3.6. We may want to switch to volume.beta.kubernetes.io/storage-class in stable/mariadb/templates/pvc.yaml; I am seeing the same issue with almost all the charts under http://storage.googleapis.com/kubernetes-charts/stable/...
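For reference, the fix boils down to swapping the annotation key in the chart's PVC template; a simplified sketch (field values here are approximations, not the chart's exact contents):

```yaml
# stable/mariadb/templates/pvc.yaml (simplified sketch)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-data                                      # the real chart templates this name
  annotations:
    # was: volume.alpha.kubernetes.io/storage-class: "generic"
    volume.beta.kubernetes.io/storage-class: "generic"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                                        # placeholder size
```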
Indeed, that's really strange, especially since you're on 1.3 and the annotation was alpha then. I think we decided to keep the alpha annotation since 1.4 is supposed to support it, but if, as you say, beta works, then we should probably update the charts. I am confused by this though, given the way it should work was outlined in this issue #43 (comment).
The alpha version of dynamic provisioning, triggered by the volume.alpha.kubernetes.io/storage-class annotation, ignores the value of the annotation and simply provisions a volume using a hard-coded provisioner on supported cloud providers. The beta version of dynamic provisioning, triggered by the volume.beta.kubernetes.io/storage-class annotation, uses the value of the annotation to select a StorageClass object, which specifies which provisioner to use and with what parameters.
Thanks for the clarification. In that case, I think we should change the charts to use the beta annotation. @saad-ali what would a good default value for the annotation be? We're currently using "generic", but I'm not sure if that has any significance. From the documentation, it sounds like a better default value would be to leave it as an empty string.
The beta version of dynamic provisioning requires the value of the annotation to correspond to the name of a StorageClass object created by the cluster administrator. A value of empty string for the beta annotation is a mechanism to disable dynamic provisioning: a PVC with a beta annotation and an empty-string value is bound only to pre-provisioned, unbound PV objects that fulfill the claim and have no storage class specified. If no such volumes exist, the claim remains unbound. If you want your claim to be dynamically provisioned, its annotation must name an existing StorageClass. This is designed so that the decision of how a request for storage is fulfilled is in the hands of the cluster administrator (via the StorageClass objects they choose to define).
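To make the pairing concrete, here is a sketch of a StorageClass and a PVC whose beta annotation names it (the class name, provisioner, and parameters are placeholders, not taken from this thread):

```yaml
# Sketch: a StorageClass created by the cluster admin...
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs    # placeholder provisioner; depends on the cluster
parameters:
  type: gp2
---
# ...and a PVC that selects it; the annotation value must equal the StorageClass name
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```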
Thanks a lot for the explanations @saad-ali, this is really useful. In that case, I believe we should update the charts to conditionally set the annotation only if the user provides a storage class value. Thoughts @viglesiasce?
More on this. See kubernetes/kubernetes#31617 (comment) and http://kubernetes.io/docs/user-guide/persistent-volumes/#writing-portable-configuration. Essentially, we should conditionally set the beta annotation if a storage class is provided, or revert to the alpha annotation if not. The above PR will add storage classes by default to AWS, GKE, GCE, and OpenStack clusters, which will enable us to always set the beta annotation. However, the earliest this might be available is 1.6.
Let's all +1 on kubernetes/kubernetes#31617 to let the author know his PR is important to us.
I just got bit by this. The majority of charts in this repo that use persistent volumes don't currently work in multi-zone clusters. Can I suggest that the easiest path forward, even if it's only an interim solution, is to include both the alpha and the beta annotations on the PVC? AFAIK, k8s 1.3 will completely ignore the beta annotation it doesn't recognize. IMHO, this would be a great way to restore many charts to a working state for the 1.4.x crowd, and it doesn't prevent further discussion on this matter from continuing as things evolve. Edit: To cover our bases, it would be good to test this idea further before committing to it, but as noted, my initial attempt has shown it to be promising.
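A sketch of that interim idea, with both keys set on the claim ("generic" stands in for whatever class value a given chart uses):

```yaml
# PVC metadata carrying both annotation flavors (sketch)
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: "generic"   # used by the alpha dynamic provisioner
    volume.beta.kubernetes.io/storage-class: "generic"    # used by the beta provisioner (1.4+); ignored by older clusters
```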
Can we get agreement on the best outcome here? It currently seems to me that we can add both annotations to ensure that it works for 1.3 and 1.4 clusters. Is that statement correct?
(xref discussion in kubernetes/kubernetes#37239) Ultimately, we want users not to have to worry about how their storage is fulfilled (unless they really care about it); only cluster admins should worry about how storage is fulfilled. Therefore, ideally, users should just create a PVC with no annotation. The cluster admin should decide how the PVC is fulfilled; to do so, they can either manually pre-create PV objects that satisfy the claims, or create a default StorageClass so that unannotated PVCs are dynamically provisioned.
A beta dynamic provisioning annotation is really something only a user should set, because it is the mechanism for selecting a custom dynamic storage provisioner (e.g. I want an SSD instead of spinning disk, so I will select a StorageClass that provisions SSDs). The problem, as you've all run into, is that if a cluster admin 1) did not manually create PV objects, and 2) did not create a default StorageClass, then PVCs (without an annotation) will remain unbound. The suggestions I've seen here are to stick with the alpha annotation, or to set both the alpha and beta annotations.
The problem you are trying to solve is what happens if the cluster admin doesn't specify a way to fulfill storage. And the answer, while not great, is that the request for storage remains outstanding until the resources to fulfill it are available. My suggestion is to specify no annotation by default, but to let users pass through a beta StorageClass if they want to (to select a class if they prefer). The problem with this approach is that it won't "just work" for cloud providers at the moment (in 1.4 and 1.5); that only changes in 1.6. So if you really want a better user experience for cloud provider users, then I guess you could stick with the alpha annotation for now, and once 1.6 ships, switch to no annotation (with an optional StorageClass pass-through).
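For concreteness, the "default StorageClass" piece of this (the thing 1.6-era cloud clusters are expected to ship out of the box) is an ordinary StorageClass marked as the default via an annotation; a sketch with placeholder provisioner and parameters, and note the exact default-class annotation key has varied across Kubernetes releases:

```yaml
# Sketch of a default StorageClass, so that unannotated PVCs get dynamically provisioned
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: kubernetes.io/gce-pd     # placeholder; depends on the cluster
parameters:
  type: pd-standard
```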
That's a very academic answer. It's well thought out, and while I appreciate many of the points laid out, the fact remains that in a multi-zone cluster on 1.4, many of the charts in this repo that currently use the alpha annotation simply do not work. Waiting for 1.6 so that this can be rectified "the right way" means those charts remain unusable on such clusters in the interim.
Thanks for the detailed thoughts @saad-ali @krancour. It's really important to us to provide a good out-of-the-box experience, so may I suggest the following approach: if the user supplies a storage class value, set the beta annotation to it; otherwise, fall back to the alpha annotation so that dynamic provisioning still works out of the box on the major cloud providers.
I think this provides a good-enough default, whilst also making it easy to use storage classes (and of course there's the option to disable persistence entirely if a cluster just cannot support it). @mgoodness uses this approach in the Prometheus chart: #235. If there is agreement on this approach, then I can update all the charts to follow it. How do we want to communicate to users that a PVC may remain unbound indefinitely, causing their application not to start? Is there something we can add to the chart notes? It would be good to think of a way to make it more obvious to users that they may need to disable persistence or specify a storage class for the chart to work on their cluster.
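A sketch of what that conditional looks like in a chart's PVC template, along the lines of the pattern used in #235 (the value names, e.g. `.Values.persistence.storageClass`, are illustrative rather than quoted from any chart):

```yaml
# PVC template excerpt (sketch): beta annotation when a storage class is given,
# alpha annotation otherwise so dynamic provisioning still works out of the box
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim                    # the real charts template this name
  annotations:
  {{- if .Values.persistence.storageClass }}
    volume.beta.kubernetes.io/storage-class: {{ .Values.persistence.storageClass | quote }}
  {{- else }}
    volume.alpha.kubernetes.io/storage-class: default
  {{- end }}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                    # placeholder size
```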
@prydonius you beat me to it! Yes, I think that is the best thing we can do for now. Let's figure out the unbound issue in a separate thread.
@prydonius what would be wrong with keeping things as they are, but simply adding the beta annotation alongside any place the alpha annotation is currently used? It seems to me that by doing so, the current behavior of the charts for clusters <= 1.3 is not changed in any way from what it currently is, and that for clusters >= 1.4 it fixes the current problem.
@krancour I believe this will break the "just works" case for >= 1.4, since the beta annotation requires you to have a storage class. Since >= 1.4 still supports the alpha annotation and dynamically provisions PVs on most cloud providers without needing a storage class, I think falling back to the alpha annotation is the best default for when a storage class isn't provided. Does that make sense? I should have a PR for this change to all charts soon.
It wasn't clear to me that the beta provisioner didn't replace the alpha provisioner.
Another data point: I'm unable to use alpha for my Ceph RBD storage class. See kubernetes/kubernetes#39201.
I'm hitting this as well with a bare metal cluster and a glusterfs (NFS) StorageClass. It looks like some charts have been updated (https://github.com/kubernetes/charts/search?utf8=%E2%9C%93&q=volume.beta) and some haven't (postgresql, redis, gitlab-ce). #255 was going for a chart-wide update but it got dropped. Should individual PRs be created to update the older charts?
@ravishivt, @prydonius is going to do an individual audit and file an issue. Happy to field any PRs that address this issue in the meantime. #235 is the standard approach we are following.
The only thing I don't like about the approach in #235 is that it's not possible to leverage the default StorageClass by not providing an annotation. It's not a big deal though; I just have to make sure all my k8s users get the StorageClass's name right.
@ravishivt that's a good point. I think for now we should stick with the approach in #235, and the default storage class use case will be covered when we update charts for k8s 1.6. Thanks for everyone's input on this; I've created #520 to track the PRs @AmandaCameron has contributed.
Resolved. Closing.