
xfs fsType not supported #56

Closed
dewongway opened this issue Jan 20, 2023 · 8 comments · Fixed by #59
@dewongway
Describe the bug
Mounting a PV in a test pod fails when fsType is set to xfs. I'm not sure whether this is a bug or whether xfs is simply not supported at this time.

To Reproduce
Steps to reproduce the behavior:

Create a StorageClass. Set fsType to xfs

apiVersion: storage.k8s.io/v1
kind: StorageClass
provisioner: csi-exos-x.seagate.com # Check pkg/driver.go, Required for the plugin to recognize this storage class as handled by itself.
volumeBindingMode: WaitForFirstConsumer # Prefer this value to avoid unschedulable pods (https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode)
allowVolumeExpansion: true
metadata:
  name: me5-2-storageclass # Choose the name that fits the best with your StorageClass.
parameters:
  # Secrets name and namespace, they can be the same for provisioner, controller-publish and controller-expand sections.
  csi.storage.k8s.io/provisioner-secret-name: seagate-exos-x-csi-secrets
  csi.storage.k8s.io/provisioner-secret-namespace: seagate
  csi.storage.k8s.io/controller-publish-secret-name: seagate-exos-x-csi-secrets
  csi.storage.k8s.io/controller-publish-secret-namespace: seagate
  csi.storage.k8s.io/controller-expand-secret-name: seagate-exos-x-csi-secrets
  csi.storage.k8s.io/controller-expand-secret-namespace: seagate
  fsType: xfs # Desired filesystem
  pool: A # Pool to use on the IQN to provision volumes
  volPrefix: csi # Desired prefix for volume naming, an underscore is appended
  storageProtocol: iscsi # The storage interface (iscsi, fc, sas) being used for storage i/o

Create a test pod using the example provided in the example directory. Both the PV and the PVC were created successfully.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: me5-2-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: me5-2-storageclass
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: ghcr.io/seagate/seagate-exos-x-testapp
    command: ["/bin/sh", "-c", "while sleep 60; do echo hello > /vol/test && ls -l /vol && cat /vol/test && rm /vol/test; done"]
    name: test-pod-container
    volumeMounts:
    - mountPath: /vol
      name: me5-vol1
    ports:
    - containerPort: 8080
  volumes:
  - name: me5-vol1
    persistentVolumeClaim:
      claimName: me5-2-pvc

The test-pod failed to mount the PV.

$ kubectl describe pod test-pod -n test-seagate 
Name:             test-pod
Namespace:        test-seagate
Priority:         0
Service Account:  default
Node:             test-cluster-pool1-7404b9af-tbftk
Start Time:       Thu, 19 Jan 2023 20:36:59 -0600
Labels:           objectset.rio.cattle.io/hash=f91b03139c70646de591b3433240629349ebfa53
Annotations:      kubernetes.io/psp: global-unrestricted-psp
                  objectset.rio.cattle.io/applied:
                    H4sIAAAAAAAA/3xSy47bMAz8FYNnK37IcWL32HOLnPayzYGWmVitXrAUd4GF/72gN7tBW7QXWxrOkBxSr4BBP9EctXfQw1JBDj+0G6GHkx8hB0sJR0wI/Sugcz5h0t5FvvrhO6kUKe...
                  objectset.rio.cattle.io/id: 21cefbde-cadf-4263-b072-3c1968761444
Status:           Pending
IP:               
IPs:              <none>
Containers:
  test-pod-container:
    Container ID:  
    Image:         ghcr.io/seagate/seagate-exos-x-testapp
    Image ID:      
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
      -c
      while sleep 60; do echo hello > /vol/test && ls -l /vol && cat /vol/test && rm /vol/test; done
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-th9d6 (ro)
      /vol from me5-vol1 (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  me5-vol1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  me5-2-pvc
    ReadOnly:   false
  kube-api-access-th9d6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   Scheduled               86s                default-scheduler        Successfully assigned test-seagate/test-pod to test-cluster-pool1-7404b9af-tbftk
  Normal   SuccessfulAttachVolume  83s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-6e585771-9c5f-4494-8eb8-a49846415dbe"
  Warning  FailedMount             12s (x8 over 77s)  kubelet                  MountVolume.SetUp failed for volume "pvc-6e585771-9c5f-4494-8eb8-a49846415dbe" : rpc error: code = DataLoss desc = (publish) filesystem (/dev/dm-0) seems to be corrupted: e2fsck 1.43.8 (1-Jan-2018)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/dm-0

The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

/dev/dm-0 contains a xfs file system
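A note on the error above: the driver appears to be running e2fsck against a device that was formatted as xfs, and e2fsck bails out because the ext2/3/4 magic number is missing. Filesystem types can be told apart by their superblock magic: xfs stores the ASCII string "XFSB" at offset 0, while ext2/3/4 store the bytes 0x53 0xEF at offset 1080. A minimal sketch of that check, simulated on a scratch file rather than a real block device (the filename and logic here are illustrative, not the driver's actual code):

```shell
# Simulate an xfs superblock on a scratch file, then detect it by magic.
dev=$(mktemp)
printf 'XFSB' > "$dev"        # fake xfs magic at offset 0 (illustrative only)

# Read the first 4 bytes, as a filesystem probe would.
magic=$(dd if="$dev" bs=1 count=4 2>/dev/null)
if [ "$magic" = "XFSB" ]; then
  echo "xfs"
else
  echo "not xfs"
fi

rm -f "$dev"
```

On a real node, `blkid /dev/dm-0` (or `file -s /dev/dm-0`) reports the detected filesystem without modifying the device, which is a safer first diagnostic than running e2fsck.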



Expected behavior
I expect xfs to be accepted as a valid fsType.

Storage System (please complete the following information):

  • Vendor: Dell PowerVault ME5
  • Model:
  • Firmware Version: ME5.1.0.1.0

Environment:

  • Kubernetes version: v1.24.4+rke2r1
  • Host OS: SUSE Linux Enterprise Server 15 SP3


@dewongway
Author

The new version 1.5.6 doesn't seem to work even with ext4. I removed the old version and installed 1.5.6. With version 1.5.6, I can't get past the PVC creation; it gets stuck with the following "could not parse topology requirements" error:

$ kubectl describe pvc me5-ext4-pvc -n test-seagate
Name:          me5-ext4-pvc
Namespace:     test-seagate
StorageClass:  me5-ext4-storageclass
Status:        Pending
Volume:        
Labels:        objectset.rio.cattle.io/hash=dd85fa9f3b1348ec45664af00b0dd9889e63c570
Annotations:   objectset.rio.cattle.io/applied:
                 H4sIAAAAAAAA/3yPsW7jMAyG34WzlZNtObG9ZrjprkWHdCg60BKdqLUlV2SCAoHfvVDSqUM38v/ADz+vgIs/UGIfA/RwKaGAdx8c9PCYUxYKcojTeab9hH6GAmYSdCgI/RUwhCgoPg...
               objectset.rio.cattle.io/id: 8aff421a-2142-442e-b029-e4fc0b300074
               volume.beta.kubernetes.io/storage-provisioner: csi-exos-x.seagate.com
               volume.kubernetes.io/selected-node: test-cluster-pool1-7404b9af-tbftk
               volume.kubernetes.io/storage-provisioner: csi-exos-x.seagate.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       test-pod
Events:
  Type     Reason                Age                From                                                                                                               Message
  ----     ------                ----               ----                                                                                                               -------
  Normal   WaitForFirstConsumer  57s                persistentvolume-controller                                                                                        waiting for first consumer to be created before binding
  Normal   Provisioning          22s (x6 over 57s)  csi-exos-x.seagate.com_seagate-exos-x-csi-controller-server-7dbdd4746b-2wxdx_c52645ef-bd92-48d9-9f95-f24b2fcc77d8  External provisioner is provisioning volume for claim "test-seagate/me5-ext4-pvc"
  Warning  ProvisioningFailed    21s (x6 over 56s)  csi-exos-x.seagate.com_seagate-exos-x-csi-controller-server-7dbdd4746b-2wxdx_c52645ef-bd92-48d9-9f95-f24b2fcc77d8  failed to provision volume with StorageClass "me5-ext4-storageclass": rpc error: code = Unavailable desc = could not parse topology requirements
  Normal   ExternalProvisioning  9s (x6 over 57s)   persistentvolume-controller                                                                                        waiting for a volume to be created, either by external provisioner "csi-exos-x.seagate.com" or manually created by system administrator
apiVersion: storage.k8s.io/v1
kind: StorageClass
provisioner: csi-exos-x.seagate.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
metadata:
  name: me5-ext4-storageclass
parameters:
  csi.storage.k8s.io/provisioner-secret-name: seagate-exos-x-csi-secrets
  csi.storage.k8s.io/provisioner-secret-namespace: seagate
  csi.storage.k8s.io/controller-publish-secret-name: seagate-exos-x-csi-secrets
  csi.storage.k8s.io/controller-publish-secret-namespace: seagate
  csi.storage.k8s.io/controller-expand-secret-name: seagate-exos-x-csi-secrets
  csi.storage.k8s.io/controller-expand-secret-namespace: seagate
  fsType: ext4
  pool: A
  volPrefix: csi-me5
  storageProtocol: iscsi

@seagate-chris
Collaborator

We broke iSCSI in the process of getting SAS topology support working. I'll try to get the fix out tomorrow.

@seagate-chris seagate-chris self-assigned this Jan 26, 2023
@dewongway
Author

Hi Chris, just curious if you will have the fix this week. I'm also testing snapshots and I'm having an issue with the clone volume not mounting on the node automatically. I would rather test the snapshot again on the new release before creating a new issue.

Thanks

@seagate-chris
Collaborator

I'm testing the fix today--if all goes well it should be published by tomorrow.

@dewongway
Author

Thanks, Chris. Will test the new version this week.

@dewongway
Author

Chris,

Release 1.5.7 still has the same issue "could not parse topology requirements".

I am also confused about the code I downloaded. First I did a git clone of https://github.com/Seagate/seagate-exos-x-csi.git and performed a helm upgrade. It recreated the controller, but it still has the v1.5.6 tag. Then I tried downloading the source directly from https://github.com/Seagate/seagate-exos-x-csi/releases/tag/v1.5.7. It also has the v1.5.6 tag after running the helm upgrade. In the CHANGELOG from both downloads I do see the merges for #60 and #56, so I think I got the right release.

git clone copy:
$egrep -i "#56|#60" not-seagate-exos-x-csi-1.5.7/CHANGELOG.md 
- Merge pull request #60 from Seagate/bug#56 ([204b559](https://github.com/Seagate/seagate-exos-x-csi/commit/204b559bdaf864b741a3f7e7c71da4412881cace)), closes [#60](https://github.com/Seagate/seagate-exos-x-csi/issues/60) [Seagate/bug#56](https://github.com/Seagate/bug/issues/56) [Bug#56](https://github.com/Bug/issues/56)
- fix "could not parse topology requirements" error for iSCSI targets (#56, HS-332) ([13c749f](https://github.com/Seagate/seagate-exos-x-csi/commit/13c749feae805d8881f97ce088b229944b5ff00b)), closes [#56](https://github.com/Seagate/seagate-exos-x-csi/issues/56)

zip download copy:
$ egrep -i "#56|#60" seagate-exos-x-csi-1.5.7/CHANGELOG.md 
- Merge pull request #60 from Seagate/bug#56 ([204b559](https://github.com/Seagate/seagate-exos-x-csi/commit/204b559bdaf864b741a3f7e7c71da4412881cace)), closes [#60](https://github.com/Seagate/seagate-exos-x-csi/issues/60) [Seagate/bug#56](https://github.com/Seagate/bug/issues/56) [Bug#56](https://github.com/Bug/issues/56)
- fix "could not parse topology requirements" error for iSCSI targets (#56, HS-332) ([13c749f](https://github.com/Seagate/seagate-exos-x-csi/commit/13c749feae805d8881f97ce088b229944b5ff00b)), closes [#56](https://github.com/Seagate/seagate-exos-x-csi/issues/56)

@seagate-chris
Collaborator

I'm not sure why the helm chart wasn't automatically updated to specify v1.5.7, but you can override it by adding "--set image.tag=1.5.7" to the "helm install" command. I didn't catch this because I used that option when I was testing.

@dewongway
Author

"--set image.tag=1.5.7" failed to pull the image, but "--set image.tag=v1.5.7" worked. I have verified that both the ext4 and xfs fsTypes are working. Thanks!
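For reference, the full override can be sketched as below. The release name, chart reference, and namespace are placeholders for your installation; only the `--set image.tag` flag (with the leading "v") comes from this thread:

```shell
# Placeholder release name, chart, and namespace -- adjust for your install.
# Note the leading "v": "--set image.tag=1.5.7" alone fails to pull the image.
helm upgrade seagate-csi seagate-exos-x-csi \
  --namespace seagate \
  --set image.tag=v1.5.7
```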
