MAC address conflict when restoring a virtual machine to alternate namespace #199
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten
Rotten issues close after 30d of inactivity. /close
@kubevirt-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@mhenriks: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-lifecycle rotten
@mhenriks @alromeros This will be a critical enhancement to support restore to an alternate namespace. The current understanding is that kubevirt-velero-plugin skips the VMI restore if the VMI is owned by a VM. In the VM we deployed on a KubeVirt OCP cluster, I see that the MAC address is in the VM spec and the VMI had the firmware UUID.
I think the default behavior should be to preserve the MAC when moving the VM to another namespace, but we can support a label on the
If a firmware UUID is not specified in the VM, it is calculated by hashing the VM name. I think that hashing the VM UID (or namespace+name) would be better, but I'm not sure this is something we can change at this point [1]. We could also support generating a unique firmware ID at restore time if that is important to you.
cc @alromeros ^
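A minimal Go sketch of the collision concern above, not KubeVirt's actual derivation: hashToUUID is a hypothetical helper that formats a SHA-1 digest as a UUID-shaped string. The point is that hashing only the VM name collides for same-named VMs in different namespaces, while hashing namespace+name keeps the result unique per namespace.

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// hashToUUID derives a deterministic UUID-shaped string from the input.
// Illustration only; the real firmware UUID derivation may differ.
func hashToUUID(s string) string {
	h := sha1.Sum([]byte(s))
	return fmt.Sprintf("%x-%x-%x-%x-%x", h[0:4], h[4:6], h[6:8], h[8:10], h[10:16])
}

func main() {
	// Hashing only the VM name: a VM restored under the same name in a
	// different namespace gets the same firmware UUID as the original.
	fmt.Println(hashToUUID("my-vm") == hashToUUID("my-vm")) // true

	// Hashing namespace+name would disambiguate the two copies.
	fmt.Println(hashToUUID("ns-a/my-vm") == hashToUUID("ns-b/my-vm")) // false
}
```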
Hey @mhenriks, I agree with the proposed implementation. @30787, we can get quite flexible with how we handle VM backups and restores, as long as the new behavior remains optional and we manage it through labels or annotations on the backup and restore objects. I'm happy to work on implementing this if you're good with these details. |
@mhenriks @alromeros Thank you. Looking forward to this enhancement. |
@alromeros @mhenriks Can you please confirm whether this change will be available with the kubevirt-velero-plugin image that will be part of the January release of OADP 1.4.2, or earlier.
Hey @30787, so AFAIK @ShellyKa13 is waiting for Velero to merge a go.mod fix so that we can bump to the new version and do a release. The feature will hopefully be ready for next week, so if everything goes as expected the change will be available in the January release of OADP.
Hey, the merge was done and I have the PR for the bump, but there seem to be some failures with the new Velero 1.15 version, which I'm looking into. Anyway, for 1.4.2 we don't need Velero 1.15; we will need to backport @alromeros's PR to v0.7 of the plugin.
Is this a BUG REPORT or FEATURE REQUEST?:
What happened:
When a virtual machine is restored to an alternate namespace, it is restored with the same MAC address as the original virtual machine. This results in a MAC address conflict if the original virtual machine is still running in the original namespace.
What you expected to happen:
Provide a way to blank the MAC address on restore.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
The issue was resolved by updating the plugin to clear the MAC addresses in the restore item action.
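A rough Go sketch of what such a fix does, assuming the VM arrives as unstructured data (the form a Velero restore item action receives): clearMACs is a hypothetical stand-in for the plugin's logic, and the field path follows the KubeVirt VM schema (spec.template.spec.domain.devices.interfaces[].macAddress). Deleting the field lets the cluster assign a fresh MAC in the target namespace.

```go
package main

import "fmt"

// clearMACs blanks macAddress on every interface of a VM represented as
// unstructured data. Hypothetical helper, not the actual plugin code.
func clearMACs(vm map[string]interface{}) {
	// Walk spec.template.spec.domain.devices, bailing out if a level is missing.
	cur := vm
	for _, key := range []string{"spec", "template", "spec", "domain", "devices"} {
		next, ok := cur[key].(map[string]interface{})
		if !ok {
			return
		}
		cur = next
	}
	ifaces, ok := cur["interfaces"].([]interface{})
	if !ok {
		return
	}
	for _, i := range ifaces {
		if iface, ok := i.(map[string]interface{}); ok {
			delete(iface, "macAddress") // let the cluster assign a fresh MAC
		}
	}
}

func main() {
	vm := map[string]interface{}{
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"domain": map[string]interface{}{
						"devices": map[string]interface{}{
							"interfaces": []interface{}{
								map[string]interface{}{"name": "default", "macAddress": "02:42:ac:11:00:02"},
							},
						},
					},
				},
			},
		},
	}
	clearMACs(vm)
	iface := vm["spec"].(map[string]interface{})["template"].(map[string]interface{})["spec"].(map[string]interface{})["domain"].(map[string]interface{})["devices"].(map[string]interface{})["interfaces"].([]interface{})[0].(map[string]interface{})
	_, hasMAC := iface["macAddress"]
	fmt.Println("macAddress present after clearing:", hasMAC) // false
}
```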
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml):
- Kubernetes version (use kubectl version):