
pod has unbound immediate PersistentVolumeClaims #95

Closed
matusnovak opened this issue Apr 29, 2021 · 9 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


matusnovak commented Apr 29, 2021

Can't get it to work. The test-pod fails with:

Warning FailedScheduling 15s (x2 over 15s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

I have done the following:

1. I have exported the folder in /etc/exports as:

/export         192.168.176.0/24(rw,sync,fsid=0,crossmnt,no_subtree_check,no_root_squash,sec=sys)
/export/example 192.168.176.0/24(rw,sync,no_subtree_check,no_root_squash,sec=sys)

2. I have verified that the NFS export works by mounting it from another machine and creating files in it.

3. I have used Helm to install the provisioner with the following parameters:

--set nfs.server=192.168.176.131 --set nfs.path=/export/example
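
For completeness, the install followed the chart's README; the repo URL and release name below are reconstructed from that README (the release name also matches the release= label on the pod), so treat this as a sketch rather than the exact command:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=192.168.176.131 \
    --set nfs.path=/export/example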

The provisioner pod is running. kubectl describe pod output:

Name:         nfs-subdir-external-provisioner-797d858c5c-zvrvb
Namespace:    default
Priority:     0
Node:         ubuntu/192.168.176.131
Start Time:   Thu, 29 Apr 2021 21:10:01 +0000
Labels:       app=nfs-subdir-external-provisioner
              pod-template-hash=797d858c5c
              release=nfs-subdir-external-provisioner
Annotations:  cni.projectcalico.org/podIP: 10.1.243.199/32
              cni.projectcalico.org/podIPs: 10.1.243.199/32
Status:       Running
IP:           10.1.243.199
IPs:
  IP:           10.1.243.199
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-797d858c5c
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   containerd://6588820a7513a3815c65711fe6e3310e18d83a9f817d701de3bf13e9f8dfb9dc
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 29 Apr 2021 21:10:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.176.131
      NFS_PATH:          /export/example
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-subdir-external-provisioner-token-j559x (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.176.131
    Path:      /export/example
    ReadOnly:  false
  nfs-subdir-external-provisioner-token-j559x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-subdir-external-provisioner-token-j559x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  7m43s  default-scheduler  Successfully assigned default/nfs-subdir-external-provisioner-797d858c5c-zvrvb to ubuntu
  Normal  Pulled     7m43s  kubelet            Container image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
  Normal  Created    7m43s  kubelet            Created container nfs-subdir-external-provisioner
  Normal  Started    7m43s  kubelet            Started container nfs-subdir-external-provisioner

4. I have deployed the example:

kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

But it fails with "pod has unbound immediate PersistentVolumeClaims". kubectl describe pod output:

Name:         test-pod
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  test-pod:
    Image:      gcr.io/google_containers/busybox:1.24
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
    Args:
      -c
      touch /mnt/SUCCESS && exit 0 || exit 1
    Environment:  <none>
    Mounts:
      /mnt from nfs-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-d9njl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  nfs-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-claim
    ReadOnly:   false
  default-token-d9njl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-d9njl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  1s (x12 over 10m)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

I have also tried setting the permissions of the exported NFS folder to 777, but that did not help.

nfsstat -m on the node reports the following:

$ nfsstat -m
/var/snap/microk8s/common/var/lib/kubelet/pods/98a24993-b7b9-4ebd-9d98-bd11aaa87aaa/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root from 192.168.176.131:/export/example
 Flags: rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=2049,timeo=600,retrans=2,sec=sys,mountaddr=192.168.176.131,mountvers=3,mountport=35741,mountproto=tcp,local_lock=none,addr=192.168.176.131

What am I doing wrong?

@yonatankahana (Contributor)

Can you show us the provisioner logs via kubectl -n default logs nfs-subdir-external-provisioner-797d858c5c-zvrvb, and maybe also the PVC description via kubectl -n default describe pvc test-claim?

@matusnovak (Author)

Hi @yonatankahana, thanks for the reply. Here is the output:

$ kubectl -n default logs nfs-subdir-external-provisioner-797d858c5c-zvrvb
I0502 16:20:10.011875       1 leaderelection.go:242] attempting to acquire leader lease  default/cluster.local-nfs-subdir-external-provisioner...
I0502 16:20:27.650523       1 leaderelection.go:252] successfully acquired lease default/cluster.local-nfs-subdir-external-provisioner
I0502 16:20:27.650979       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"cluster.local-nfs-subdir-external-provisioner", UID:"aa63dba8-32ab-43c6-844d-c5c50ebfe797", APIVersion:"v1", ResourceVersion:"88190", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-subdir-external-provisioner-797d858c5c-zvrvb_de017c21-9964-4034-a2a1-2076e01263fa became leader
I0502 16:20:27.651436       1 controller.go:820] Starting provisioner controller cluster.local/nfs-subdir-external-provisioner_nfs-subdir-external-provisioner-797d858c5c-zvrvb_de017c21-9964-4034-a2a1-2076e01263fa!
I0502 16:20:27.752146       1 controller.go:869] Started provisioner controller cluster.local/nfs-subdir-external-provisioner_nfs-subdir-external-provisioner-797d858c5c-zvrvb_de017c21-9964-4034-a2a1-2076e01263fa!
$ kubectl -n default describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       test-pod
Events:
  Type     Reason              Age                   From                         Message
  ----     ------              ----                  ----                         -------
  Warning  ProvisioningFailed  19h (x2601 over 30h)  persistentvolume-controller  storageclass.storage.k8s.io "managed-nfs-storage" not found
  Warning  ProvisioningFailed  4m43s (x42 over 14m)  persistentvolume-controller  storageclass.storage.k8s.io "managed-nfs-storage" not found

I am just a beginner when it comes to Kubernetes, but it looks to me like the claim is expecting a different name: managed-nfs-storage, as defined in deploy/test-claim.yaml.

I renamed the storageClassName property in deploy/test-claim.yaml to nfs-client, and the test-pod deployed and started successfully; the SUCCESS file got created.
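
For anyone else hitting this, the only change is the storageClassName field; the rest of the claim below is a sketch based on the repo's deploy/test-claim.yaml, so double-check it against your copy:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client  # was managed-nfs-storage; must match the StorageClass the Helm chart created
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi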

@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 31, 2021
@fluxens
Copy link

fluxens commented Aug 18, 2021

The chart's values.yaml sets name: nfs-client, but it should be name: managed-nfs-storage. As a result, installing via the Helm chart and then trying the test files will always fail.
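
Until that is reconciled, one workaround (assuming the chart exposes this value as storageClass.name, which is where the name: nfs-client default lives) is to override it at install time so the test files match:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=<NFS_SERVER_IP> \
    --set nfs.path=/export/example \
    --set storageClass.name=managed-nfs-storage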


fluxens commented Aug 18, 2021

/remove-lifecycle stale.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 17, 2021

rageshkrishna commented Sep 29, 2021

Just ran into this myself. Changing managed-nfs-storage to nfs-client in test-claim.yaml got it to work.

I'm not sure if this is supposed to be fixed in the chart, the test file, or the readme, but it definitely makes the first-run experience less than ideal.
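
A quick way to see which name your install actually created is to list the storage classes; the claim's storageClassName just has to match one of them (plain kubectl, nothing chart-specific assumed):

kubectl get storageclass
kubectl -n default get pvc test-claim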

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to the /close command in the previous comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

hyakutem pushed a commit to hyakutem/nfs-subdir-external-provisioner that referenced this issue Feb 3, 2023