[9.4-stable] Add Support for Persistent OVMF Settings in Pillar #4276

Merged
merged 7 commits into lf-edge:9.4-stable from 9.4_backport_fml.config on Sep 24, 2024

Conversation

@OhmSpectator (Member)

Backport of #4261.

A new version of #4269; the previous PR targeted the 9.4-stable-EOS branch, which prevented our workflows from being triggered (the branch name had an "-EOS" suffix).

@OhmSpectator (Member, Author) commented Sep 23, 2024

The Eden tests are still failing without the fix lf-edge/eden@33ed9a7.

So we will test it with ztests instead. Waiting for the build to finish.

@OhmSpectator (Member, Author)

I don't see this workflow running
https://github.com/lf-edge/eve/blob/9.4-stable/.github/workflows/danger.yml
Strange.

@OhmSpectator (Member, Author)

Maybe this is the problem, found in the "PR build" workflow summary:

[Deprecation notice: v1, v2, and v3 of the artifact actions](https://github.com/lf-edge/eve/actions/runs/10990471810/workflow)
The following artifacts were uploaded using a version of actions/upload-artifact that is scheduled for deprecation: "eve-kvm-amd64", "eve-kvm-arm64", "eve-mini-riscv64", "eve-xen-amd64", "eve-xen-arm64".
Please update your workflow to use v4 of the artifact actions.
Learn more: https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/

@OhmSpectator (Member, Author)

@uncleDecart, @milan-zededa, could you recommend a quick fix here? I need the danger workflow to run to publish the artefacts to Docker Hub so that the tests can use them...

@uncleDecart (Member)

@OhmSpectator danger should've been triggered after a successful PR build.

@uncleDecart (Member)

> Maybe this is the problem, found in the "PR build" workflow summary:
>
> [Deprecation notice: v1, v2, and v3 of the artifact actions](https://github.com/lf-edge/eve/actions/runs/10990471810/workflow)
> The following artifacts were uploaded using a version of actions/upload-artifact that is scheduled for deprecation: "eve-kvm-amd64", "eve-kvm-arm64", "eve-mini-riscv64", "eve-xen-amd64", "eve-xen-arm64".
> Please update your workflow to use v4 of the artifact actions.
> Learn more: https://github.blog/changelog/2024-04-16-deprecation-notice-v3-of-the-artifact-actions/

They were still uploaded; you can see them under Artifacts on the summary page of the PR build action here.

@uncleDecart (Member)

And I don't see any workflows triggered here

@OhmSpectator (Member, Author)

@uncleDecart yep. I need to understand how to push the artifacts to Docker Hub so they are available via docker pull to another tool (ztest).

@uncleDecart (Member)

> @uncleDecart yep. I need to understand how to push the artifacts to Docker Hub so they are available via docker pull to another tool (ztest).

The quickest fix, I think, would be to run make eve locally, which pushes the containers to your local setup; running ztest on that same machine will then pull the local containers. Pushing artefacts to Docker Hub would require credentials, so that's not the way to go. I'll check the workflows now...

@OhmSpectator (Member, Author)

> @uncleDecart yep. I need to understand how to push the artifacts to Docker Hub so they are available via docker pull to another tool (ztest).
>
> The quickest fix, I think, would be to run make eve locally, which pushes the containers to your local setup; running ztest on that same machine will then pull the local containers. Pushing artefacts to Docker Hub would require credentials, so that's not the way to go. I'll check the workflows now...

My understanding is that the danger workflow pushes to Docker Hub, but it's not being called for some reason.

@OhmSpectator force-pushed the 9.4_backport_fml.config branch from 775860a to a81a9cc on September 23, 2024 15:43
OhmSpectator and others added 7 commits September 23, 2024 19:19
This commit copies OVMF_VARS.fd into the pillar container by adding it
to /usr/lib/xen/boot/ovmf_vars.bin. It is important that the file is
available in the pillar container because Pillar will create per-domain
copies of it stored in /persist, which are then accessible to the
xen-tools container. This sets the groundwork for enabling virtual
machines to save and retain UEFI settings across reboots by using
per-domain NVRAM files.

Signed-off-by: Nikolay Martyanov <[email protected]>
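
For illustration only, here is a minimal Go sketch of the kind of per-domain copy this commit enables; the /persist/ovmf-vars directory, the file naming, and the function name are assumptions, not Pillar's actual code.

```go
package ovmf

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// Hypothetical paths; the locations actually used by Pillar may differ.
const (
	ovmfVarsTemplate = "/usr/lib/xen/boot/ovmf_vars.bin"
	persistOVMFDir   = "/persist/ovmf-vars"
)

// CopyOVMFVarsForDomain creates a writable, per-domain copy of the read-only
// OVMF_VARS template so that each VM gets its own NVRAM store in /persist.
func CopyOVMFVarsForDomain(domainName string) (string, error) {
	if err := os.MkdirAll(persistOVMFDir, 0o755); err != nil {
		return "", err
	}
	dst := filepath.Join(persistOVMFDir, domainName+"-OVMF_VARS.fd")
	src, err := os.Open(ovmfVarsTemplate)
	if err != nil {
		return "", fmt.Errorf("open OVMF_VARS template: %w", err)
	}
	defer src.Close()
	out, err := os.Create(dst)
	if err != nil {
		return "", err
	}
	defer out.Close()
	if _, err := io.Copy(out, src); err != nil {
		return "", err
	}
	return dst, nil
}
```
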
Introduce support for managing per-domain OVMF_VARS.fd files, which are
essential for maintaining persistent UEFI settings for FML guests. It
adds functionality to prepare and clean up individual OVMF settings
files stored in the persist directory, ensuring that each virtual
machine has its own dedicated NVRAM file. The VM configuration
structures are updated to reference the bootloader settings file,
enabling the creation of unique UEFI variable stores for each domain.

Signed-off-by: Nikolay Martyanov <[email protected]>
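
A matching cleanup step would remove the per-domain file once the domain is deleted. A sketch under the same assumed layout:

```go
package ovmf

import (
	"os"
	"path/filepath"
)

// CleanupOVMFVarsForDomain removes the per-domain NVRAM file once the domain
// is deleted, so stale UEFI settings do not accumulate in /persist.
// Directory and naming scheme are assumptions used for illustration only.
func CleanupOVMFVarsForDomain(domainName string) error {
	path := filepath.Join("/persist/ovmf-vars", domainName+"-OVMF_VARS.fd")
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}
```
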
Switch to using separate OVMF_CODE.fd and OVMF_VARS.fd files for FML x86
modes instead of a combined .bin file. This ensures that settings are
stored correctly and maintains consistent naming conventions. These
changes do not affect containers, ARM or Xen.

To support ARM, the OVMF build would need to produce separate files; currently it produces a single QEMU_EFI image that incorporates both the code and variable sections.

Signed-off-by: Nikolay Martyanov <[email protected]>
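
For reference, the conventional way to wire split firmware images into QEMU is two pflash drives: unit 0 read-only for OVMF_CODE.fd and unit 1 writable for the per-domain OVMF_VARS.fd. A hedged Go sketch that builds such an argument list; EVE's actual VM config generation may look different.

```go
package ovmf

// QemuPflashArgs returns the conventional QEMU arguments for split OVMF
// images: the code image is attached read-only on pflash unit 0, and the
// per-domain variable store is attached writable on unit 1 so that UEFI
// settings persist across reboots.
func QemuPflashArgs(codePath, varsPath string) []string {
	return []string{
		"-drive", "if=pflash,format=raw,readonly=on,unit=0,file=" + codePath,
		"-drive", "if=pflash,format=raw,unit=1,file=" + varsPath,
	}
}
```
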
Add a new global config value, app.fml.resolution, to set a custom resolution
for FML apps. It is a string value in the format "widthxheight".

This value can be set device-wide as a global config value or per app/VM
by defining it as a top-level variable in the cloud-config.

Signed-off-by: Shahriyar Jalayeri <[email protected]>
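
Since the value is a plain "widthxheight" string, it can be validated with a small parser. A Go sketch, making no assumption about how Pillar actually consumes the setting:

```go
package ovmf

import (
	"fmt"
	"strconv"
	"strings"
)

// ParseResolution validates an app.fml.resolution value of the form
// "widthxheight" (for example "1024x768") and returns its components.
func ParseResolution(value string) (width, height int, err error) {
	w, h, ok := strings.Cut(value, "x")
	if !ok {
		return 0, 0, fmt.Errorf("expected widthxheight, got %q", value)
	}
	if width, err = strconv.Atoi(w); err != nil {
		return 0, 0, fmt.Errorf("invalid width in %q: %w", value, err)
	}
	if height, err = strconv.Atoi(h); err != nil {
		return 0, 0, fmt.Errorf("invalid height in %q: %w", value, err)
	}
	return width, height, nil
}
```
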
Implement logic to select predefined OVMF_VARS.fd files for specific
screen resolutions. Add pre-saved OVMF settings for the 800x600, 1024x768,
1280x800, and 1920x1080 resolutions, ensuring clean boot entries.

Signed-off-by: Nikolay Martyanov <[email protected]>
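
Conceptually this is a lookup from the requested resolution to a pre-saved variable store, with a fallback to the default template. A sketch with made-up file names:

```go
package ovmf

// Pre-saved variable stores keyed by resolution; the file names below are
// made up for illustration. Unknown resolutions fall back to the default
// template shipped in the container.
var ovmfVarsByResolution = map[string]string{
	"800x600":   "ovmf_vars_800x600.fd",
	"1024x768":  "ovmf_vars_1024x768.fd",
	"1280x800":  "ovmf_vars_1280x800.fd",
	"1920x1080": "ovmf_vars_1920x1080.fd",
}

// SelectOVMFVars picks the pre-saved OVMF_VARS file matching the requested
// resolution, or the default template name if no match exists.
func SelectOVMFVars(resolution string) string {
	if name, ok := ovmfVarsByResolution[resolution]; ok {
		return name
	}
	return "ovmf_vars.bin" // assumed default template name
}
```
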
Adapt the existing tests to account for the addition of the OVMF
settings file for FML guests, ensuring the expected output includes the
BootLoader. Also fix the ARM tests by ensuring the firmware line is
present in the expected configuration, aligning the tests with the
updated behavior for UEFI boot on both AMD64 and ARM architectures.

Signed-off-by: Nikolay Martyanov <[email protected]>
In the FIRMWARE doc, add a new section explaining what OVMF is, when
it's used in EVE, the key OVMF files, how settings are managed, and
future automation plans.

Signed-off-by: Nikolay Martyanov <[email protected]>
@OhmSpectator force-pushed the 9.4_backport_fml.config branch from 8f2aef9 to fe38741 on September 23, 2024 17:20
@OhmSpectator (Member, Author)

Just stating the fact: the workflows are broken in 9.4, so we cannot do our regular releases here.

@OhmSpectator (Member, Author)

@yash-zededa, could you help with this? We need assistance with GH Actions for the upcoming release after merging this PR. At the moment it looks like it will not work.

@eriknordmark (Contributor) left a comment

Manually ran ztests on two devices overnight as a regression test since the workflows are not working well. Passed.

@eriknordmark merged commit ffa00bc into lf-edge:9.4-stable on Sep 24, 2024
11 of 15 checks passed
@yash-zededa (Collaborator)

@eriknordmark @OhmSpectator Are we expecting more releases for the 9.4 branch? We can modify the Jenkins job to get the artefacts directly from GitHub. I don't think it would be the most direct approach, but maybe we can figure out a way to do this.

@OhmSpectator (Member, Author)

> @eriknordmark @OhmSpectator Are we expecting more releases for the 9.4 branch? We can modify the Jenkins job to get the artefacts directly from GitHub. I don't think it would be the most direct approach, but maybe we can figure out a way to do this.

We need to create only one release, exactly from the current state. Then we move the branch to EOS again. That's my understanding.
