
Fix getting lvm recovery #70

Merged 5 commits into main, Jun 23, 2023
Conversation

@Itxaka (Member) commented Jun 23, 2023

We already have a function that uses ghw+udev data to identify partitions which are under a devicemapper, so we can use that to gather the info about the LVM partitions without adding more complexity.

This also reverts the lsblk introduction.
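
For reference, a minimal sketch of what listing partitions through ghw looks like (assuming the github.com/jaypipes/ghw API; illustrative only, not this PR's exact code):

    package main

    import (
        "fmt"

        "github.com/jaypipes/ghw"
    )

    func main() {
        // ghw reads sysfs and the udev database directly, so no external
        // tool such as lsblk is needed.
        block, err := ghw.Block()
        if err != nil {
            panic(err)
        }
        for _, disk := range block.Disks {
            for _, part := range disk.Partitions {
                fmt.Printf("%s: fs=%s size=%d mount=%q disk=/dev/%s\n",
                    part.Name, part.Type, part.SizeBytes, part.MountPoint, disk.Name)
            }
        }
    }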

@Itxaka requested a review from a team, June 23, 2023 10:16
@codecov-commenter commented Jun 23, 2023

Codecov Report

Merging #70 (3ce69ce) into main (f051fab) will decrease coverage by 0.95%.
The diff coverage is 23.58%.


@@            Coverage Diff             @@
##             main      #70      +/-   ##
==========================================
- Coverage   57.42%   56.48%   -0.95%     
==========================================
  Files          40       40              
  Lines        4792     4835      +43     
==========================================
- Hits         2752     2731      -21     
- Misses       1823     1888      +65     
+ Partials      217      216       -1     
Impacted Files                    Coverage Δ
pkg/utils/getpartitions.go        24.42% <18.18%> (-33.02%) ⬇️
pkg/elementalConfig/config.go     57.38% <46.66%> (+3.27%) ⬆️
pkg/utils/common.go               79.00% <50.00%> (-0.83%) ⬇️
pkg/cloudinit/layout_plugin.go    86.56% <100.00%> (ø)

... and 1 file with indirect coverage changes


@mudler (Member) left a comment

Looking good here, good catch also to use ghw directly!

@Itxaka requested a review from jimmykarily, June 23, 2023 10:31
@Itxaka (Member, Author) commented Jun 23, 2023

I'm worried about the disk entry, as we cannot infer from udev which disk this comes from, and @jimmykarily mentioned that it was important :/

@Itxaka (Member, Author) commented Jun 23, 2023

New fix for it to get the disk; the only things still missing are the size and mountpoint.

With OEM as LVM for testing:

    OEM: &v1.Partition{
      Name: "oem",
      FilesystemLabel: "COS_OEM",
      Size: 0,
      FS: "ext4",
      Flags: nil,
      MountPoint: "",
      Path: "/dev/disk/by-label/COS_OEM",
      Disk: "/dev/vda2",
    },
    Recovery: &v1.Partition{
      Name: "vda3",
      FilesystemLabel: "COS_RECOVERY",
      Size: 8192,
      FS: "ext4",
      Flags: nil,
      MountPoint: "/run/initramfs/cos-state",
      Path: "/dev/vda3",
      Disk: "/dev/vda",
    },

@jimmykarily (Contributor) commented

Disk: "/dev/vda2", doesn't look correct. It should have been /dev/vda no?

@@ -143,7 +143,7 @@ func (e Elemental) UnmountPartitions(parts v1.PartitionList) error {
 	if part.MountPoint != "" {
 		err = e.UnmountPartition(part)
 		if err != nil {
-			errMsg += fmt.Sprintf("Failed to unmount %s\n Error: %s\n", part.MountPoint, err.Error())
+			errMsg += fmt.Sprintf("Failed to unmount %s\n", part.MountPoint)
Contributor commented

Let's keep this change

@Itxaka (Member, Author) replied

Good point, adding it.

@jimmykarily (Contributor) commented

Good catch! I'm sorry I missed this, it would have saved us quite some time...

@Itxaka (Member, Author) commented Jun 23, 2023

Disk: "/dev/vda2", doesn't look correct. It should have been /dev/vda no?

Oh crap, yes! Reworking it...

@Itxaka (Member, Author) commented Jun 23, 2023

Now it properly gets the size and parent disk:

    OEM: &v1.Partition{
      Name: "oem",
      FilesystemLabel: "COS_OEM",
      Size: 62914560,
      FS: "ext4",
      Flags: nil,
      MountPoint: "",
      Path: "/dev/disk/by-label/COS_OEM",
      Disk: "/dev/vda",
    },

@Itxaka (Member, Author) commented Jun 23, 2023

Now it's probably missing the mountpoints 😭

Comment on lines +167 to +172
    // udev database records are prefixed with a type letter; "E:" entries
    // are device properties stored as KEY=value pairs.
    if strings.HasPrefix(udevLine, "E:") {
        // Strip the "E:" prefix and split into at most two parts on "=".
        if s := strings.SplitN(udevLine[2:], "=", 2); len(s) == 2 {
            udevInfo[s[0]] = s[1]
            continue
        }
    }
Contributor commented

Are there any docs on what the E: prefix is? Not super important, but it would make this code a bit less magical.

@Itxaka (Member, Author) replied

This is mainly because we are accessing the database directly without using udevadm; with udevadm you can ask for only some types of values, like only device paths or symlinks. But we don't have udevadm in our systems, so we need to filter them ourselves.
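
For reference, the files under /run/udev/data use single-letter record prefixes: E: lines carry the device properties as KEY=value pairs (the same values udevadm info --query=property would print), and S: lines carry /dev symlinks. An illustrative, made-up excerpt for a block device:

    S:disk/by-label/COS_RECOVERY
    E:DEVTYPE=partition
    E:ID_FS_TYPE=ext4
    E:ID_FS_LABEL=COS_RECOVERY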

@Itxaka merged commit 9dd1dbd into main, Jun 23, 2023
@Itxaka deleted the fix_recovery_lvm branch, June 23, 2023 12:49
@jimmykarily (Contributor) commented

I compiled a kairos-agent binary out of this branch, booted into recovery on a Raspberry Pi (with LVM partitions), scp'ed the binary over, and ran kairos-agent --debug reset. It failed:

WARNING: jsonschema: '' does not validate with file:///home/kairos/schema.json#/required: missing properties: 'users'
INFO[2023-06-23T13:58:17Z] kairos-agent version v0.0.1                  
DEBU[2023-06-23T13:58:17Z] Full config loaded: &v1.RunConfig{
  Debug: true,
  Strict: false,
  Reboot: false,
  PowerOff: false,
  CloudInitPaths: []string{
    "/run/initramfs/cos-state",
  },
  EjectCD: false,
  Config: v1.Config{
    Logger: &v1.logrusWrapper{ // p0
      Logger: &logrus.Logger{
        Out: &os.File{},
        Hooks: logrus.LevelHooks{},
        Formatter: &logrus.TextFormatter{
          ForceColors: true,
          DisableColors: false,
          ForceQuote: false,
          DisableQuote: false,
          EnvironmentOverrideColors: false,
          DisableTimestamp: false,
          FullTimestamp: true,
          TimestampFormat: "",
          DisableSorting: false,
          SortingFunc: nil,
          DisableLevelTruncation: false,
          PadLevelText: false,
          QuoteEmptyFields: false,
          FieldMap: logrus.FieldMap(nil),
          CallerPrettyfier: nil,
        },
        ReportCaller: false,
        Level: 5,
        ExitFunc: os.Exit,
        BufferPool: nil,
      },
    },
    Fs: &vfs.osfs{}, // p1
    Mounter: &mount.Mounter{},
    Runner: &v1.RealRunner{ // p2
      Logger: p0,
    },
    Syscall: &v1.RealSyscall{},
    CloudInitRunner: &cloudinit.YipCloudInitRunner{},
    ImageExtractor: v1.OCIImageExtractor{},
    Client: &http.Client{},
    Platform: &v1.Platform{
      OS: "linux",
      Arch: "arm64",
      GolangArch: "arm64",
    },
    Cosign: false,
    Verify: false,
    CosignPubKey: "",
    Repos: []v1.Repository{},
    Arch: "arm64",
    SquashFsCompressionConfig: []string{
      "-comp",
      "gzip",
    },
    SquashFsNoCompression: false,
  },
} 
DEBU[2023-06-23T13:58:17Z] Full config: (same RunConfig dump as above)
DEBU[2023-06-23T13:58:17Z] Running cmd: 'cat /proc/cmdline'             
DEBU[2023-06-23T13:58:17Z] Running cmd: 'cat /proc/cmdline'             
179:3

Reading udevdata of device: /run/udev/data/b179:0
179:3

Reading udevdata of device: /run/udev/data/b179:0
DEBU[2023-06-23T13:58:17Z] Loaded reset spec: &v1.ResetSpec{
  FormatPersistent: false,
  FormatOEM: false,
  GrubDefEntry: "Kairos",
  Tty: "tty1",
  Active: v1.Image{
    File: "/run/cos/state/cOS/active.img",
    Label: "COS_ACTIVE",
    Size: 3000,
    FS: "ext2",
    Source: &v1.ImageSource{},
    MountPoint: "/run/cos/active",
    LoopDevice: "",
  },
  Passive: v1.Image{
    File: "/run/cos/state/cOS/passive.img",
    Label: "COS_PASSIVE",
    Size: 0,
    FS: "ext2",
    Source: &v1.ImageSource{},
    MountPoint: "",
    LoopDevice: "",
  },
  Partitions: v1.ElementalPartitions{
    BIOS: nil,
    EFI: &v1.Partition{
      Name: "efi",
      FilesystemLabel: "COS_GRUB",
      Size: 96,
      FS: "vfat",
      Flags: nil,
      MountPoint: "/run/cos/efi",
      Path: "/dev/mmcblk0p1",
      Disk: "/dev/mmcblk0",
    },
    OEM: &v1.Partition{
      Name: "oem",
      FilesystemLabel: "COS_OEM",
      Size: 67108864,
      FS: "ext4",
      Flags: nil,
      MountPoint: "",
      Path: "/dev/disk/by-label/COS_OEM",
      Disk: "/dev/mmcblk0",
    },
    Recovery: &v1.Partition{
      Name: "recovery",
      FilesystemLabel: "COS_RECOVERY",
      Size: 4399824896,
      FS: "ext4",
      Flags: nil,
      MountPoint: "/run/cos/recovery",
      Path: "/dev/disk/by-label/COS_RECOVERY",
      Disk: "/dev/mmcblk0",
    },
    State: &v1.Partition{
      Name: "state",
      FilesystemLabel: "COS_STATE",
      Size: 6200,
      FS: "ext4",
      Flags: nil,
      MountPoint: "/run/cos/state",
      Path: "/dev/mmcblk0p2",
      Disk: "/dev/mmcblk0",
    },
    Persistent: &v1.Partition{
      Name: "persistent",
      FilesystemLabel: "COS_PERSISTENT",
      Size: 64,
      FS: "ext4",
      Flags: nil,
      MountPoint: "/run/cos/persistent",
      Path: "/dev/mmcblk0p4",
      Disk: "/dev/mmcblk0",
    },
  },
  Target: "/dev/mmcblk0",
  Efi: true,
  GrubConf: "/etc/cos/grub.cfg",
  State: &v1.InstallState{
    Date: "2023-06-15T07:29:50Z",
    Partitions: map[string]*v1.PartitionState{
      "oem": &v1.PartitionState{
        FSLabel: "COS_OEM",
        Images: map[string]*v1.ImageState(nil), // p0
      },
      "persistent": &v1.PartitionState{
        FSLabel: "COS_PERSISTENT",
        Images: p0,
      },
      "recovery": nil,
      "state": &v1.PartitionState{
        FSLabel: "COS_STATE",
        Images: map[string]*v1.ImageState{
          "active": &v1.ImageState{
            Source: &v1.ImageSource{},
            SourceMetadata: nil,
            Label: "COS_ACTIVE",
            FS: "ext2",
          },
          "passive": &v1.ImageState{
            Source: &v1.ImageSource{},
            SourceMetadata: nil,
            Label: "COS_PASSIVE",
            FS: "ext2",
          },
        },
      },
    },
  },
} 
INFO[2023-06-23T13:58:17Z] Unmounting disk partitions                   
DEBU[2023-06-23T13:58:17Z] Not unmounting partition, /run/cos/state doesn't look like mountpoint 
DEBU[2023-06-23T13:58:17Z] Not unmounting partition, /run/cos/persistent doesn't look like mountpoint 
DEBU[2023-06-23T13:58:17Z] Not unmounting partition, /run/cos/efi doesn't look like mountpoint 
INFO[2023-06-23T13:58:17Z] Formatting 'state' partition                 
DEBU[2023-06-23T13:58:17Z] Running cmd: 'mkfs.ext4 -L COS_STATE /dev/mmcblk0p2' 
1 error occurred:
        * exec: "mkfs.ext4": executable file not found in $PATH

I tried to simply call GetAllPartitions(), print the partitions, and exit, and here is what I get:

part = {Name:mmcblk0p1 FilesystemLabel:COS_GRUB Size:96 FS:vfat Flags:[] MountPoint: Path:/dev/mmcblk0p1 Disk:/dev/mmcblk0}
part = {Name:mmcblk0p2 FilesystemLabel:COS_STATE Size:6200 FS:ext4 Flags:[] MountPoint: Path:/dev/mmcblk0p2 Disk:/dev/mmcblk0}
part = {Name:mmcblk0p3 FilesystemLabel:unknown Size:4264 FS:LVM2_member Flags:[] MountPoint: Path:/dev/mmcblk0p3 Disk:/dev/mmcblk0}
part = {Name:mmcblk0p4 FilesystemLabel:COS_PERSISTENT Size:64 FS:ext4 Flags:[] MountPoint: Path:/dev/mmcblk0p4 Disk:/dev/mmcblk0}

This is what lsblk shows:

kairos@cos-recovery:~> lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0                   7:0    0    2G  1 loop /
mmcblk0               179:0    0 59.7G  0 disk 
├─mmcblk0p1           179:1    0   96M  0 part 
├─mmcblk0p2           179:2    0  6.1G  0 part 
├─mmcblk0p3           179:3    0  4.2G  0 part 
│ ├─KairosVG-oem      254:0    0   64M  0 lvm  /oem
│ └─KairosVG-recovery 254:1    0  4.1G  0 lvm  /run/initramfs/cos-state
└─mmcblk0p4           179:4    0   64M  0 part 

For some reason, it doesn't return the LVM partitions.
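
For what it's worth, LVM logical volumes surface as device-mapper nodes (the dm devices behind KairosVG-oem and KairosVG-recovery above) rather than as partitions of the disk itself, so a plain disk-to-partition walk will not see them. A hypothetical Go sketch (not code from this PR) that maps dm devices back to their parent partitions via sysfs:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Each device-mapper node (e.g. an LVM LV) appears as /sys/block/dm-*;
        // its "slaves" directory names the underlying devices (e.g. mmcblk0p3).
        entries, err := os.ReadDir("/sys/block")
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            if !strings.HasPrefix(e.Name(), "dm-") {
                continue
            }
            name, _ := os.ReadFile(filepath.Join("/sys/block", e.Name(), "dm", "name"))
            slaves, _ := os.ReadDir(filepath.Join("/sys/block", e.Name(), "slaves"))
            var parents []string
            for _, s := range slaves {
                parents = append(parents, "/dev/"+s.Name())
            }
            fmt.Printf("%s (%s) <- %s\n",
                e.Name(), strings.TrimSpace(string(name)), strings.Join(parents, ", "))
        }
    }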

@jimmykarily (Contributor) commented

Never mind. GetAllPartitions is not the one finding the LVM ones; that was the previous implementation. It seems that there is a different issue.
