feat: Delete machine #40
Conversation
This looks good, and it's good to remove the cluster finalizers.
machineScope.SetNotReady(infrav1.MicrovmDeleteFailedReason, clusterv1.ConditionSeverityError, "")
return ctrl.Result{}, err
}
After the call to Delete, should we requeue and check the status on the next reconciliation? Flintlock may have an issue reconciling the delete itself, so we should wait until it's actually been deleted. WDYT?
We should check in flintlock that the Get has distinct state determination for deleting vs deleted.
And then only remove the finalizer when it gets to a deleted / not found state?
👍 makes sense
Flintlock does not have a "deleted" or "deleting" state. "Deleted" is unnecessary, because where would we record it once the resource itself is gone? We could introduce a "deleting" phase and check it here, only issuing a Delete call if one is not already being processed.
Flintlock does do its own "should I delete" check of course, but it's good to avoid redundant API calls if we can.
For now I am just going to requeue after the Delete call. If the FL delete fails and the requeue comes around again, flintlock should not make a second attempt in the plan.
I will add a ticket to Flintlock to discuss a "deleting" phase on the state.
My understanding of CAPI stuff is pretty limited still, but this looks ok to me.
lgtm (only that nested if hurts my precious eyes a bit)
We do not currently create any supporting infrastructure for our machines. We can add this back if we need it.
What this PR does / why we need it:
Implements MicroVMMachine deletion reconcile.
Calls the flintlock API.
LMK if I have missed any CAPI-type things, I was mostly guessing.
Also removes finalizers from the Cluster controller, as we do not do any supporting infrastructure creation there. We can add this back later if that changes.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #13
Special notes for your reviewer:
Successfully ran a manual create+delete on my local machine.
Checklist: