topology.kubernetes.io/zone label on pv #675

Closed
nirroz93 opened this issue Dec 27, 2020 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

nirroz93 commented Dec 27, 2020

Is your feature request related to a problem?/Why is this needed
Hi!
The in-tree provisioner used to add a failure-domain.beta.kubernetes.io/zone or topology.kubernetes.io/zone label on the PV itself.

It would be nice to have the label so you can easily see all volumes in a given AZ and so on (instead of parsing the nodeAffinity).
Describe the solution you'd like in detail
I know the CSI driver adds a similar label in nodeSelectorTerms, but it would be nice to have the standard label (or the topology.ebs.csi.aws.com/zone label) in the volume metadata as well.

Is there a way to do this already (some flag I'm missing)? If not, this feature would be nice to have.

The only workaround I see is to run a pod that adds the label post-creation (see the sketch at the end of this comment), but I think the driver should set the label itself (it is a common label, and adding it can do no harm).

Example
Using provisioner version 0.7.1 (helm chart 0.6.2)

kubectl describe pv pvc-6b120736-4071-401d-8ebe-7fd5c2b0c86e
Name:              pvc-6b120736-4071-401d-8ebe-7fd5c2b0c86e
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers:        [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass:      gp2-csi
Status:            Bound
Claim:             myclaim
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          50Gi
Node Affinity:     
  Required Terms:  
    Term 0:        topology.ebs.csi.aws.com/zone in [us-east-1c]
Message:           
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ebs.csi.aws.com
    FSType:            xfs
    VolumeHandle:      vol-xxxxxxxxxxxxxxx
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1607444912778-8081-ebs.csi.aws.com
Events:                <none>

The PV should also carry the topology.kubernetes.io/zone label.
P.S.
According to https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#topologykubernetesiozone, the label should be set on the PV by any topology-aware volume provisioner.
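
As a stopgap, here is a minimal sketch of the workaround mentioned above, assuming each PV's nodeAffinity carries a single topology.ebs.csi.aws.com/zone requirement and that kubectl and jq are available; the loop and the target label key are illustrative, not something the driver provides:

# Copy the zone from each PV's nodeAffinity into a topology.kubernetes.io/zone label.
for pv in $(kubectl get pv -o name); do
  zone=$(kubectl get "$pv" -o json | jq -r '
    .spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[]
    | select(.key == "topology.ebs.csi.aws.com/zone")
    | .values[0]')
  if [ -n "$zone" ] && [ "$zone" != "null" ]; then
    kubectl label --overwrite "$pv" "topology.kubernetes.io/zone=$zone"
  fi
done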

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Mar 27, 2021
nirroz93 (Author)

/close
Fixed by #773

@k8s-ci-robot (Contributor)

@nirroz93: Closing this issue in response to /close.

reasonerjt commented Nov 4, 2021

It doesn't work in my environment; have you verified it? @nirroz93

nirroz93 (Author) commented Nov 4, 2021

It doesn't work in my environment; have you verified it? @nirroz93

I don't remember if I verified it; I probably didn't. Before marking it as not stale, I checked the open PRs and saw the one I mentioned (which I thought fixed the issue).

TBBle commented Nov 5, 2021

I don't think #773 addresses this request; it instead tells CSI to honour the existing label on the node, which might have changed the value of

I know the CSI adds similar label in nodeSelectorTerms

from topology.ebs.csi.aws.com/zone to topology.kubernetes.io/zone.

Edit: From the following comment, it looks like it didn't change the nodeSelector used; it's still using topology.ebs.csi.aws.com/zone.

kaypeter87 commented Nov 17, 2021

@nirroz93 This is also not working in my environment. None of the persistent volumes dynamically provisioned by the CSI driver have the topology zone in their Labels field:

Name:              pvc-6b15c0ab-39d6-454d-819b-f9df54fbfbb3
Labels:            <none>
Annotations:       pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers:        [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass:      ebs-gp3
Status:            Bound
Claim:             vault-dev/audit-vault-dev-4
Reclaim Policy:    Delete
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:     
  Required Terms:  
    Term 0:        topology.ebs.csi.aws.com/zone in [us-east-1a]
Message:           
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            ebs.csi.aws.com
    FSType:            ext4
    VolumeHandle:      vol-006600dae9c81124c
    ReadOnly:          false
    VolumeAttributes:      storage.kubernetes.io/csiProvisionerIdentity=1626390895534-8081-ebs.csi.aws.com
Events:                <none>

TBBle commented Nov 17, 2021

/reopen

Not sure I can do this, but the request has not been fulfilled.

@k8s-ci-robot (Contributor)

@TBBle: You can't reopen an issue/PR unless you authored it or you are a collaborator.

wongma7 (Contributor) commented Nov 17, 2021

/reopen

Sorry if issues are getting closed prematurely; there are a couple of issues open about this and I haven't triaged them all yet, but I will pick one to track (whichever is oldest) and close the others as duplicates.

To give a quick update: at the moment it is intended that the CSI driver uses its own custom labels. We reverted the change to use the GA labels.

The Controller component (the one creating PVs with the topology label) and the Node component (the one populating Nodes with the topology label) must agree on a label. Otherwise, imagine a scenario where the Controller component creates PVs with topology requirements that no Nodes can satisfy because the label keys don't match. At the moment, the easiest way to guarantee agreement is to use the custom label topology.ebs.csi.aws.com/zone.

The driver could switch to the standard label topology.kubernetes.io/zone, but it is not as trivial as we initially thought, i.e. we can't just find/replace the custom label with the standard one. Typically the topology.kubernetes.io/zone label gets populated by the AWS cloud provider, so the Node component can't overwrite it. What the driver could do instead is have the Node component continue to populate Nodes with topology.ebs.csi.aws.com/zone while the Controller component creates PVs with the standard label topology.kubernetes.io/zone, under the assumption that Nodes already have that label. Easier to illustrate than put into words:

Ideal scenario:

On Nodes:

  • topology.kubernetes.io/zone = us-east-1a
  • topology.ebs.csi.aws.com/zone = us-east-1a

On PVs created by older driver v1.4.0:

  • topology.ebs.csi.aws.com/zone = us-east-1a

On PVs created by newer driver v1.x.0:

  • topology.kubernetes.io/zone = us-east-1a
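
To make the label-agreement point concrete, here is a hedged checking sketch (not part of the driver; it assumes kubectl and jq are available and reuses the PV name from the example earlier in this issue as a placeholder) showing which zone label keys the Nodes and a PV actually carry:

# Zone-related label keys on each Node (the standard key is normally set by the
# cloud provider, the driver key by the EBS CSI node plugin).
kubectl get nodes -o json | jq '.items[]
  | {node: .metadata.name,
     zoneLabels: (.metadata.labels | with_entries(select(.key | test("zone"))))}'

# Labels and nodeAffinity keys on a PV (placeholder name from the example above).
kubectl get pv pvc-6b120736-4071-401d-8ebe-7fd5c2b0c86e -o json \
  | jq '{labels: .metadata.labels,
         affinityKeys: [.spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[].key]}'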

k8s-ci-robot reopened this on Nov 17, 2021
@k8s-ci-robot (Contributor)

@wongma7: Reopened this issue in response to /reopen.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue in response to /close.
