Kubevirt: Defer eve reboot/shutdown/update until drain completes #4494
base: master
Conversation
An odd yetus failure, not sure what to make of this.
As a part of kubevirt-eve we have multiple cluster nodes, each hosting app workloads and volume replicas. This implements deferral of EVE management config operations which would leave storage replicas unavailable, until the cluster volume is no longer running on a single replica.

An example:
1. Node 1 has an outage and recovers.
2. Before volumes complete rebuilding on node 1, node 2 has an outage and recovers.
3. Volumes begin rebuilding replicas on nodes 1 and 2. The only available rebuild source is on node 3.
4. A user-initiated request arrives to reboot/shutdown/update eve-os on node 3.
5. That config request is deferred until replicas are rebuilt on the other nodes.

At a high level the eve-side workflow looks like this (a sketch of the cordon/drain calls follows the dependency list below):
1. EVE config is received requesting reboot/shutdown/baseos-image-change for node 1.
2. Drain is requested for node 1.
3. zedkube cordons node 1 so that new workloads are blocked from scheduling on that node.
4. zedkube initiates a kubernetes drain of that node, removing workloads.
5. As a part of drain, the PDB (Pod Disruption Budget) at the longhorn level determines whether the local replica is the last online one.
6. Drain waits for volume replicas to rebuild across the cluster.
7. Drain completes and a NodeDrainStatus message is sent to continue the original config request.
8. On the next boot event, zedkube nodeOnBootHealthStatusWatcher() waits until the local kubernetes node comes online/ready for the first time on that boot and uncordons it, allowing workloads to be scheduled.

Note: For EVE baseos image updates this path waits until the new baseos image is fully available locally (LOADED or INSTALLED) and activated before beginning drain.

Signed-off-by: Andrew Durbin <[email protected]>
Main dependencies listed here, others pulled in for version conflicts:
- k8s.io/kubectl/pkg/drain
- github.com/longhorn/longhorn-manager

Signed-off-by: Andrew Durbin <[email protected]>
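For illustration, a minimal sketch of what the cordon/drain step could look like using k8s.io/kubectl/pkg/drain (the dependency listed above). Client construction, field values, and the timeout here are assumptions for the sketch, not the exact zedkube implementation.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

// cordonAndDrain marks the node unschedulable and then evicts its workloads.
// Longhorn's PodDisruptionBudget blocks eviction of the last healthy replica,
// so the drain effectively waits for replica rebuilds elsewhere in the cluster.
func cordonAndDrain(nodeName string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	helper := &drain.Helper{
		Ctx:                 context.Background(),
		Client:              client,
		IgnoreAllDaemonSets: true,
		DeleteEmptyDirData:  true,
		GracePeriodSeconds:  -1,            // use each pod's own grace period
		Timeout:             2 * time.Hour, // illustrative; the real timeout is configurable
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Step 3 of the workflow: cordon, so new workloads cannot schedule here.
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return fmt.Errorf("cordon %s: %w", nodeName, err)
	}
	// Step 4: drain (evict workloads); this blocks while PDBs disallow eviction.
	if err := drain.RunNodeDrain(helper, nodeName); err != nil {
		return fmt.Errorf("drain %s: %w", nodeName, err)
	}
	return nil
}
```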
yetus checking seems broken: 2025-01-18T02:42:21.7383936Z ./out has been created
@@ -136,6 +139,8 @@ func Run(ps *pubsub.PubSub, loggerArg *logrus.Logger, logArg *base.LogObject, ar
	}
	log.Functionf("user containerd ready")

	initNodeDrainPubSub(ps, &ctx)
The style in this file is to call some init functions around line 104, so it makes sense to follow that pattern for this additional initialization.
@@ -32,6 +32,17 @@ func baseOsHandleStatusUpdateUUID(ctx *baseOsMgrContext, id string) {
		return
	}

	// We want to wait to drain until we're sure we actually have a usable image local.
s/local/locally/ ?
	})
	if err != nil {
		log.Fatalf("initNodeDrainPubSub pubNodeDrainRequest err:%v", err)
		return
Misplaced return?
	if drainStatus.Status == kubeapi.NOTREQUESTED ||
		drainStatus.Status == kubeapi.FAILEDCORDON ||
		drainStatus.Status == kubeapi.FAILEDDRAIN {
		ctx.deferredBaseOsID = id
Since drain can take a while, there is a possibility that the user changes their mind about which new EVE version to use. Does the logic handle that by updating deferredBaseOsID if the user/API provides a new one while the update is deferred?
	ctx.ph.Print("INFO: Node Drain -> %s at %v\n", newStatus.Status.String(), ts)
	ctx.ph.Flush()
This doesn't follow the pattern for printing in diag. Where there is an update to something which is printed, the handlers should call triggerPrintOutput, and printOutput is where the actual information is printed.
Also, if the Status indicates some failure, wouldn't you want to print that with an ERR prefix?
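For illustration, a minimal sketch of the trigger-then-print split described above. The triggerPrintOutput/printOutput names come from this comment; the diagContext field, handler signature, and helper names are hypothetical, while ctx.ph.Print and the kubeapi status values are taken from the diff.

```go
// Handler only records the new state and triggers a redraw (names hypothetical).
func handleNodeDrainStatusImpl(ctxArg interface{}, key string, statusArg interface{}) {
	ctx := ctxArg.(*diagContext)
	newStatus, ok := statusArg.(kubeapi.NodeDrainStatus)
	if !ok {
		log.Fatalf("handleNodeDrainStatusImpl: unexpected type %T", statusArg)
	}
	ctx.nodeDrainStatus = newStatus // hypothetical field on diagContext
	triggerPrintOutput(ctx, "NodeDrainStatus")
}

// printOutput (or a helper it calls) does the actual printing, with ERR on failures.
func printNodeDrain(ctx *diagContext) {
	s := ctx.nodeDrainStatus
	switch s.Status {
	case kubeapi.FAILEDCORDON, kubeapi.FAILEDDRAIN:
		ctx.ph.Print("ERROR: Node Drain -> %s\n", s.Status.String())
	default:
		ctx.ph.Print("INFO: Node Drain -> %s\n", s.Status.String())
	}
}
```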
"github.com/lf-edge/eve/pkg/pillar/pubsub" | ||
) | ||
|
||
func handleNodeDrainStatusCreateNA(ctxArg interface{}, key string, |
Why the "NA" suffix in the function names? We reuse the names across all of the agents so it is odd to have one agent add the agentname to some (but not all) of these functions.
	configArg interface{}, _ interface{}) {
	newStatus, ok := configArg.(kubeapi.NodeDrainStatus)
	if !ok {
		log.Errorf("handleNodeDrainStatusImpl invalid type in configArg: %v", configArg)
In other agents you log.Fatal for this condition.
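A short sketch of the convention being referred to: in other pillar agents a failed type assertion in a pubsub handler is treated as a programming error and is fatal rather than logged and ignored (identifiers taken from the diff above).

```go
newStatus, ok := configArg.(kubeapi.NodeDrainStatus)
if !ok {
	// Fatal: the wrong type here indicates a coding error, not a runtime condition.
	log.Fatalf("handleNodeDrainStatusImpl: unexpected type %T in configArg", configArg)
}
// ... continue handling newStatus as before ...
_ = newStatus
```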
	configArg interface{}, oldConfigArg interface{}) {
	newStatus, ok := configArg.(kubeapi.NodeDrainStatus)
	if !ok {
		log.Errorf("handleNodeDrainStatusImpl invalid type in configArg: %v", configArg)
Ditto
@@ -77,6 +77,7 @@ type getconfigContext struct {
	configGetStatus types.ConfigGetStatus
	updateInprogress bool
	readSavedConfig bool // Did we already read it?
	drainInProgress bool
The code reads as if this field has the semantics of waitForDrainComplete (or deferForDrain). Is that the semantics in zedagent?
	if ctx.poweroffCmdDeferred &&
		drainInProgress && !status.DrainInProgress {
		log.Functionf("Drain complete and deferred poweroff")
		ctx.poweroffCmdDeferred = false
I don't see the code which sets poweroffCmdDeferred due to a drain.
Did you miss a function which sets the various deferred fields due to the local profile server API being used? (I don't recall whether that is a separate function from the controller API.)
	if newConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout) != 0 &&
		newConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout) !=
			currentConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout) {
		log.Functionf("handleGlobalConfigImpl: Updating drainTimeoutHours from %d to %d",
			currentConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout),
			newConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout))
		z.drainTimeoutHours = newConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout)
	}
}
This code would be more readable if you put the current and new values in local variables instead of these very long lines of repeated calls to GlobalValueInt.
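A sketch of the suggested refactor, pulling the repeated GlobalValueInt calls into locals (identifiers taken from the diff; the local variable names are just illustrative):

```go
newTimeout := newConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout)
curTimeout := currentConfigItemValueMap.GlobalValueInt(types.KubevirtDrainTimeout)
if newTimeout != 0 && newTimeout != curTimeout {
	log.Functionf("handleGlobalConfigImpl: Updating drainTimeoutHours from %d to %d",
		curTimeout, newTimeout)
	z.drainTimeoutHours = newTimeout
}
```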
	}

	log.Noticef("handleNodeDrainStatusImplNA to:%v", newStatus)
	// NodeDrainStatus Failures here should keep drainInProgress set.
If there is something catastrophic happening to a cluster (e.g., two out of three nodes die due to hardware failures), is there a way to recover from that and make the single remaining node usable, e.g., as a one-node cluster or as a standalone device?
It seems like a reboot might not be usable to recover since (based on this comment) you can't reboot until the drain has succeeded, and that is presumably impossible if the two other nodes died while you were trying to drain.
What happens when the drain timer fires?
Is there a recovery path which does not involve a truck roll and a reinstall of EVE in such a case?
In addition to the detailed comments, I think the text in the PR description should go in a markdown file.
Also, at a high level, what are the pre-conditions for the reboot/shutdown/update?
Is it that all three nodes are healthy?
(That would ensure that if there is a hardware failure or crash of another node while this one is rebooting/updating, there is still one node left.)
If, while the drain is in progress, one of the other nodes in the cluster dies, should the reboot/update be deferred until that node is back?
	if override != nil {
		zedkubeCtx.pubNodeDrainStatus.Publish("global", override)
	}
	zedkubeCtx.drainOverrideTimer.Reset(5 * time.Minute)
Shouldn't you Stop the timer here as well like in the initial case?
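A sketch of the suggested change, stopping (and, if it already fired, draining) the timer before re-arming it; this assumes drainOverrideTimer is a *time.Timer:

```go
if !zedkubeCtx.drainOverrideTimer.Stop() {
	// Timer already fired: drain the channel so Reset starts from a clean state.
	select {
	case <-zedkubeCtx.drainOverrideTimer.C:
	default:
	}
}
zedkubeCtx.drainOverrideTimer.Reset(5 * time.Minute)
```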
		// Just print the transitions which are linked to lengthy operations or errors
		return
	}
	ctx.ph.Print("INFO: Node Drain -> %s at %v\n", newStatus.Status.String(), ts)
Should this also go in the new console TUI? Check with @rucoder
This PR implements the original API added in #4366.