
MAC address conflict when restoring a virtual machine to alternate namespace #199

Closed

esteban-ee opened this issue Nov 15, 2023 · 16 comments · Fixed by #304

Comments

@esteban-ee

esteban-ee commented Nov 15, 2023

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
What happened:

When a virtual machine is restored to an alternate namespace, it is restored with the same MAC address as the original virtual machine. This results in a MAC address conflict if the original virtual machine is still running in the original namespace.

What you expected to happen:

Provide a way to blank the MAC address on restore.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

The issue was resolved by updating the plugin to clear the MAC addresses in the restore item action.
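For illustration, a rough sketch of what "clearing the MAC addresses" amounts to, using the typed KubeVirt API. The helper name is made up and this is not the plugin's actual code:

```go
package main

import (
	"fmt"

	kubevirtv1 "kubevirt.io/api/core/v1"
)

// clearVMMacAddresses blanks the MAC address of every interface in the VM
// template so that a fresh MAC is assigned when the restored VM starts.
// (Illustrative helper; not the plugin's actual implementation.)
func clearVMMacAddresses(vm *kubevirtv1.VirtualMachine) {
	if vm.Spec.Template == nil {
		return
	}
	ifaces := vm.Spec.Template.Spec.Domain.Devices.Interfaces
	for i := range ifaces {
		ifaces[i].MacAddress = ""
	}
}

func main() {
	vm := &kubevirtv1.VirtualMachine{
		Spec: kubevirtv1.VirtualMachineSpec{
			Template: &kubevirtv1.VirtualMachineInstanceTemplateSpec{},
		},
	}
	vm.Spec.Template.Spec.Domain.Devices.Interfaces = []kubevirtv1.Interface{
		{Name: "default", MacAddress: "02:00:00:00:00:01"},
	}
	clearVMMacAddresses(vm)
	fmt.Printf("MAC after clearing: %q\n", vm.Spec.Template.Spec.Domain.Devices.Interfaces[0].MacAddress)
}
```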

Environment:

  • CDI version (use kubectl get deployments cdi-deployment -o yaml):
  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • Install tools:
  • Others:
@kubevirt-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot added the lifecycle/stale label on Feb 13, 2024
@kubevirt-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 14, 2024
@kubevirt-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

@kubevirt-bot

@kubevirt-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mhenriks
Member

mhenriks commented Nov 7, 2024

/reopen

kubevirt-bot reopened this on Nov 7, 2024
@kubevirt-bot

@mhenriks: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@mhenriks
Member

mhenriks commented Nov 7, 2024

/remove-lifecycle rotten

kubevirt-bot removed the lifecycle/rotten label on Nov 7, 2024
@mhenriks
Member

mhenriks commented Nov 7, 2024

cc kubevirt/kubevirt#11113

@30787

30787 commented Nov 9, 2024

@mhenriks @alromeros This will be a critical enhancement to support restore to an alternate namespace. Our current understanding is that kubevirt-velero-plugin skips VMI restore if the VMI is owned by a VM. In the VM we deployed on a KubeVirt OCP cluster, I see that the MAC address is in the VM spec and the VMI has the firmware UUID.
When restoring vm-1 to a new namespace ns-2, the Velero restore might fail since the original vm-1 in ns-1 is still using the same MAC address.
Could we expect kubevirt-velero-plugin to remove the MAC address from the VM spec in vmbackupitemaction or vmrestoreitemaction?
Since kubevirt-velero-plugin skips the VMI if it's owned by a VM, will restoring a VM provision a VMI with a new firmware UUID?

@mhenriks
Member

mhenriks commented Nov 9, 2024

@30787

Could we expect kubevirt-velero-plugin to remove the MAC address from the VM spec in vmbackupitemaction or vmrestoreitemaction?

I think the default behavior should be to preserve the MAC, in case the VM is simply being moved to another namespace, but we can support a label on the Restore resource to clear the MAC. Does that work for you?
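For illustration, a minimal sketch of how such an opt-in check on the Restore resource could look. The label key `velero.kubevirt.io/clear-mac-address` and the function name are hypothetical, not the plugin's actual API:

```go
package restorehooks

import (
	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

// Hypothetical label key used to opt in to clearing MAC addresses on restore;
// the real plugin may use a different key (or an annotation) for this.
const clearMacAddressLabel = "velero.kubevirt.io/clear-mac-address"

// shouldClearMacAddress reports whether the Restore object carries the opt-in
// label, so the default behavior (preserving the MAC) stays unchanged otherwise.
func shouldClearMacAddress(restore *velerov1.Restore) bool {
	return restore.GetLabels()[clearMacAddressLabel] == "true"
}
```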

Since kubevirt-velero-plugin skips the VMI if it's owned by a VM, will restoring a VM provision a VMI with a new firmware UUID?

If the firmware UUID is not specified in the VM, it is calculated by hashing the VM name. I think hashing the VM UID (or namespace+name) would be better, but I'm not sure this is something we can change at this point [1].

We could also support generating a unique firmware ID at restore time if that is important to you.

[1] kubevirt/kubevirt#12885 (comment)
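To make the name-hashing point above concrete, a rough sketch of deterministic, name-based UUID derivation. The namespace UUID and function here are illustrative placeholders, not KubeVirt's actual implementation:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

// Placeholder namespace UUID; KubeVirt uses its own fixed namespace value.
var firmwareUUIDNamespace = uuid.MustParse("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

// firmwareUUIDFromName derives a deterministic, SHA-1-based (version 5) UUID
// from the VM name: every VM with the same name yields the same firmware UUID,
// which is why regenerating it at restore time can matter.
func firmwareUUIDFromName(vmName string) string {
	return uuid.NewSHA1(firmwareUUIDNamespace, []byte(vmName)).String()
}

func main() {
	fmt.Println(firmwareUUIDFromName("vm-1")) // identical output for any VM named "vm-1"
}
```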

@mhenriks
Member

mhenriks commented Nov 9, 2024

cc @alromeros ^

@alromeros
Contributor

Hey @mhenriks, I agree with the proposed implementation. @30787, we can get quite flexible with how we handle VM backups and restores, as long as the new behavior remains optional and we manage it through labels or annotations on the backup and restore objects. I'm happy to work on implementing this if you're good with these details.

@30787

30787 commented Nov 11, 2024

@mhenriks @alromeros Thank you. Looking forward to this enhancement.

@30787

30787 commented Dec 4, 2024

@alromeros @mhenriks Can you please confirm whether this change will be available in the kubevirt-velero-plugin image that will be part of the January release of OADP 1.4.2, or earlier?

@alromeros
Contributor

alromeros commented Dec 5, 2024

Hey @30787, as far as I know @ShellyKa13 is waiting for Velero to merge a go.mod fix so that we can bump to the new version and do a release. The feature will hopefully be ready next week, so if everything goes as expected the change will be available in the January release of OADP.

@ShellyKa13
Contributor

Hey, the merge was done and I have the PR for the bump, but it seems like there are some failures with the new Velero 1.15 version, which I'm looking into. Anyway, for 1.4.2 we don't need Velero 1.15; we will need to backport @alromeros's PR to v0.7 of the plugin.
