zpool create error: one or more vdevs refer to the same device #573

Closed
jfroche opened this issue Mar 19, 2024 · 1 comment · Fixed by #574


jfroche commented Mar 19, 2024

Since #568, I have been getting an error when using a single disk with a single ZFS pool on AWS:

zpool create -f zroot -R /mnt -O com.sun:auto-snapshot=false -O compression=lz4 /dev/nvme0n1p3 /dev/nvme0n1p3
cannot create 'zroot': one or more vdevs refer to the same device, or one of the devices is part of an active md or lvm device

Reverting to a revision before that change fixes it.

I have only one disk on that machine, so I don't see why

readarray -t zfs_devices < <(cat "$disko_devices_dir"/zfs_${config.name})
returns an array with multiple devices.

$ lsblk -d
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0    0  20G  0 disk
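
For illustration, here is a minimal sketch of how a per-pool device file that gets written twice makes readarray hand the same partition to zpool create twice (the file name and device path are just the ones from the log below; this is not disko's actual code):

# Hypothetical reproduction: the partitioning step appends the same
# device to the per-pool file on each pass.
disko_devices_dir=$(mktemp -d)
echo /dev/nvme0n1p3 >>"$disko_devices_dir"/zfs_zroot   # first pass
echo /dev/nvme0n1p3 >>"$disko_devices_dir"/zfs_zroot   # second pass

readarray -t zfs_devices < <(cat "$disko_devices_dir"/zfs_zroot)
echo "${#zfs_devices[@]} entries: ${zfs_devices[*]}"
# prints: 2 entries: /dev/nvme0n1p3 /dev/nvme0n1p3
# zpool create then sees the duplicate vdev and fails with
# "one or more vdevs refer to the same device"
rm -rf "$disko_devices_dir"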

The actual configuration:

 {
    disk = {
      x = {
        type = "disk";
        device = "/dev/nvme0n1";
        content = {
          type = "table";
          format = "gpt";
          partitions = [
            {
              name = "grub";
              start = "0";
              end = "1M";
              part-type = "primary";
              flags = [ "bios_grub" ];
            }
            {
              name = "ESP";
              start = "1M";
              end = "512MiB";
              fs-type = "fat32";
              bootable = true;
              content = {
                type = "filesystem";
                format = "vfat";
                mountpoint = "/boot";
              };
            }
            {
              name = "zfs";
              start = "512MiB";
              end = "100%";
              content = {
                type = "zfs";
                pool = "zroot";
              };
            }
          ];
        };
      };
    };
    zpool = {
      zroot = {
        type = "zpool";
        rootFsOptions = {
          compression = "lz4";
          "com.sun:auto-snapshot" = "false";
        };
        datasets = {
          "root" = {
            type = "zfs_fs";
            options.mountpoint = "none";
            mountpoint = null;
          };
          "root/nixos" = {
            type = "zfs_fs";
            options.mountpoint = "/";
            mountpoint = "/";
          };
        };
      };
    };
  };
Complete log
+ step Formatting hard drive with disko
+ echo '### Formatting hard drive with disko ###'
### Formatting hard drive with disko ###
+ ssh_ /nix/store/23pyax6gh00fkny81l20b3rhhj2z1n4b-disko
+ ssh -T -i /tmp/tmp.km0Gvxg7v7/nixos-anywhere -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@XXXXXXXX /nix/store/23pyax6gh00fkny81l20b3rhhj2z1n4b-disko
Warning: Permanently added 'XXXXXXXX' (ED25519) to the list of known hosts.
umount: /mnt: not mounted
++ realpath /dev/nvme0n1
+ disk=/dev/nvme0n1
+ lsblk -a -f
NAME         FSTYPE   FSVER LABEL           UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0        squashfs 4.0                                                              0   100% /nix/.ro-store
loop1
loop2
loop3
loop4
loop5
loop6
loop7
nvme0n1
├─nvme0n1p1  ext4     1.0   cloudimg-rootfs b7e04319-1e31-407a-951f-70d745d7e894
├─nvme0n1p14
└─nvme0n1p15 vfat     FAT32 UEFI            4644-0475
+ lsblk --output-all --json
+ bash -x
++ dirname /nix/store/1r0lisvg8zbri6byz4lsybhh24y8yn1n-disk-deactivate/disk-deactivate
+ jq -r --arg disk_to_clear /dev/nvme0n1 -f /nix/store/1r0lisvg8zbri6byz4lsybhh24y8yn1n-disk-deactivate/disk-deactivate.jq
+ set -fu
+ wipefs --all -f /dev/nvme0n1p1
/dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
++ zdb -l /dev/nvme0n1p14
++ sed -nr 's/ +name: '\''(.*)'\''/\1/p'
+ zpool=
+ [[ -n '' ]]
+ unset zpool
+ wipefs --all -f /dev/nvme0n1p14
+ wipefs --all -f /dev/nvme0n1p15
/dev/nvme0n1p15: 8 bytes were erased at offset 0x00000052 (vfat): 46 41 54 33 32 20 20 20
/dev/nvme0n1p15: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/nvme0n1p15: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa
++ zdb -l /dev/nvme0n1
++ sed -nr 's/ +name: '\''(.*)'\''/\1/p'
+ zpool=
+ [[ -n '' ]]
+ unset zpool
++ lsblk /dev/nvme0n1 -l -p -o type,name
++ awk 'match($1,"raid.*") {print $2}'
+ md_dev=
+ [[ -n '' ]]
+ wipefs --all -f /dev/nvme0n1
/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/nvme0n1: 8 bytes were erased at offset 0x4fffffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
+ dd if=/dev/zero of=/dev/nvme0n1 bs=440 count=1
1+0 records in
1+0 records out
440 bytes copied, 0.00311681 s, 141 kB/s
+ lsblk -a -f
NAME    FSTYPE   FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0   squashfs 4.0                    0   100% /nix/.ro-store
loop1
loop2
loop3
loop4
loop5
loop6
loop7
nvme0n1
++ mktemp -d
+ disko_devices_dir=/tmp/tmp.Tqm27AT9Tn
+ trap 'rm -rf "$disko_devices_dir"' EXIT
+ mkdir -p /tmp/tmp.Tqm27AT9Tn
+ device=/dev/nvme0n1
+ imageSize=2G
+ name=x
+ type=disk
+ device=/dev/nvme0n1
+ format=gpt
+ type=table
+ blkid /dev/nvme0n1
+ parted -s /dev/nvme0n1 -- mklabel gpt
+ parted -s /dev/nvme0n1 -- mkpart grub 0 1M
Warning: The resulting partition is not properly aligned for best performance: 34s % 2048s != 0s
+ partprobe /dev/nvme0n1
+ udevadm trigger --subsystem-match=block
+ udevadm settle
+ parted -s /dev/nvme0n1 -- set 1 bios_grub on
+ partprobe /dev/nvme0n1
+ udevadm trigger --subsystem-match=block
+ udevadm settle
+ parted -s /dev/nvme0n1 -- mkpart ESP fat32 1M 512MiB
+ partprobe /dev/nvme0n1
+ udevadm trigger --subsystem-match=block
+ udevadm settle
+ parted -s /dev/nvme0n1 -- set 2 boot on
+ partprobe /dev/nvme0n1
+ udevadm trigger --subsystem-match=block
+ udevadm settle
+ device=/dev/nvme0n1p2
+ extraArgs=()
+ declare -a extraArgs
+ format=vfat
+ mountOptions=('defaults')
+ declare -a mountOptions
+ mountpoint=/boot
+ type=filesystem
+ blkid /dev/nvme0n1p2
+ grep -q TYPE=
+ mkfs.vfat /dev/nvme0n1p2
mkfs.fat 4.2 (2021-01-31)
+ parted -s /dev/nvme0n1 -- mkpart zfs 512MiB 100%
+ partprobe /dev/nvme0n1
+ udevadm trigger --subsystem-match=block
+ udevadm settle
+ partprobe /dev/nvme0n1
+ udevadm trigger --subsystem-match=block
+ udevadm settle
+ device=/dev/nvme0n1p3
+ pool=zroot
+ type=zfs
+ echo /dev/nvme0n1p3
+ device=/dev/nvme0n1p2
+ extraArgs=()
+ declare -a extraArgs
+ format=vfat
+ mountOptions=('defaults')
+ declare -a mountOptions
+ mountpoint=/boot
+ type=filesystem
+ blkid /dev/nvme0n1p2
+ grep -q TYPE=
+ device=/dev/nvme0n1p3
+ pool=zroot
+ type=zfs
+ echo /dev/nvme0n1p3
+ mode=
+ mountOptions=('defaults')
+ declare -a mountOptions
+ mountpoint=
+ name=zroot
+ options=()
+ declare -A options
+ rootFsOptions=(['com.sun:auto-snapshot']='false' ['compression']='lz4')
+ declare -A rootFsOptions
+ type=zpool
+ readarray -t zfs_devices
++ cat /tmp/tmp.Tqm27AT9Tn/zfs_zroot
+ zpool list zroot
cannot open 'zroot': no such pool
+ continue=1
+ for dev in "${zfs_devices[@]}"
+ blkid /dev/nvme0n1p3
+ blkid /dev/nvme0n1p3 -o export
+ grep '^PTUUID='
+ blkid /dev/nvme0n1p3 -o export
+ grep '^TYPE='
+ for dev in "${zfs_devices[@]}"
+ blkid /dev/nvme0n1p3
+ blkid /dev/nvme0n1p3 -o export
+ grep '^PTUUID='
+ blkid /dev/nvme0n1p3 -o export
+ grep '^TYPE='
+ '[' 1 -eq 1 ']'
+ zpool create -f zroot -R /mnt -O com.sun:auto-snapshot=false -O compression=lz4 /dev/nvme0n1p3 /dev/nvme0n1p3
cannot create 'zroot': one or more vdevs refer to the same device, or one of
the devices is part of an active md or lvm device
+ rm -rf /tmp/tmp.Tqm27AT9Tn
+ rm -rf /tmp/tmp.km0Gvxg7v7
+ cleanup
+ rm -rf /tmp/tmp.p1MfQY1a2m
Lassulus (Collaborator) commented:

Ah yeah, I actually run the create step twice in the table type; this is fixed by #574.
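
For reference, a defensive sketch (assuming the zfs_devices array and device file from the log above; not necessarily how #574 fixes it) would be to de-duplicate the device list before the create:

# Hypothetical guard: drop duplicate entries from the per-pool device file
# before handing the list to zpool create.
readarray -t zfs_devices < <(sort -u "$disko_devices_dir"/zfs_zroot)
zpool create -f zroot -R /mnt \
  -O com.sun:auto-snapshot=false -O compression=lz4 \
  "${zfs_devices[@]}"

The proper fix, per the comment above, is instead to not run the create step a second time in the first place.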
