uninstall: when --wait is specified, use foreground deletion. #2344

Merged
tommyp1ckles merged 2 commits into main from pr/tp/use-foreground-deletion-for-uninstall on Feb 28, 2024
Conversation
michi-covalent approved these changes on Feb 28, 2024
looks innocent enough
By default, the Helm libraries use background cascading delete, which means the call to `helm uninstall` returns once the Deployment objects are removed, while their Pods may still be terminating. As a result, running workloads such as hubble-relay may still be in the Terminating state after `cilium uninstall --wait` exits. We depend on this uninstall in CI E2E to clean up and reuse clusters for testing Cilium in different configurations. In flakes such as cilium/cilium#30993, it appears that the old Hubble Pods are bleeding into the "fresh" install. These should be harmless, but they trigger failures of the [no-error-logs] assertion in the subsequent connectivity tests. This change provides a more thorough uninstall procedure in this case.

Signed-off-by: Tom Hadlaw <[email protected]>
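A minimal sketch of what this change amounts to on the Helm side, assuming the Helm v3 Go SDK's `action.Uninstall` and its `DeletionPropagation` field (the SDK counterpart of `helm uninstall --cascade`); the function name and wiring are illustrative, not the CLI's actual code:

```go
package uninstall

import (
	"fmt"
	"time"

	"helm.sh/helm/v3/pkg/action"
)

// uninstallRelease removes a Helm release. With wait=true it uses foreground
// cascading delete so the call only returns after dependent objects (Pods,
// ReplicaSets, ...) are actually removed, instead of returning while
// workloads are still Terminating.
func uninstallRelease(cfg *action.Configuration, release string, wait bool, timeout time.Duration) error {
	u := action.NewUninstall(cfg)
	u.Timeout = timeout
	if wait {
		u.Wait = true
		// Foreground deletion: the Kubernetes garbage collector deletes
		// dependents before the owner, so Helm blocks until workloads are gone.
		u.DeletionPropagation = "foreground"
	} else {
		// Default behaviour: owners are deleted immediately and dependents
		// are cleaned up asynchronously in the background.
		u.DeletionPropagation = "background"
	}
	if _, err := u.Run(release); err != nil {
		return fmt.Errorf("helm uninstall of %s failed: %w", release, err)
	}
	return nil
}
```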
tommyp1ckles force-pushed the pr/tp/use-foreground-deletion-for-uninstall branch from 8fa1ac0 to 37c99e4 on February 28, 2024 01:42
tommyp1ckles force-pushed the pr/tp/use-foreground-deletion-for-uninstall branch from 37c99e4 to 1b96d74 on February 28, 2024 01:48
The last commit added foreground cascading delete when uninstalling with --wait. However, other issues can still occur when reusing clusters after uninstall:

* Old endpoint state written to disk may be restored upon reinstall.
* CNI deletes may be written to a local on-disk queue if the Cilium agent CNI is down, resulting in potential error logs when reinstalling Cilium and replaying the queued CNI DEL commands.

When uninstalling with --wait, disabling Hubble is now a separate uninstall step that blocks until no Hubble Pods are running. This ensures that Hubble Pods can fully terminate via Cilium without the above situations happening. Because disabling Hubble via Helm uses `helm upgrade`, we cannot rely on foreground cascading delete, so we simply poll Kubernetes until all Hubble Pods are gone.

Signed-off-by: Tom Hadlaw <[email protected]>
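A minimal sketch of the polling approach described above, assuming client-go and `wait.PollUntilContextTimeout` from `k8s.io/apimachinery`; the namespace argument, label selector, and helper name are illustrative assumptions, not the CLI's actual implementation:

```go
package uninstall

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForHubblePodsGone blocks until no Pods matching the (assumed) Hubble
// label selector remain in the namespace, or the timeout expires.
func waitForHubblePodsGone(ctx context.Context, client kubernetes.Interface, namespace string, timeout time.Duration) error {
	// Illustrative selector; the real CLI may match on different labels.
	selector := "k8s-app in (hubble-relay, hubble-ui)"
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
				LabelSelector: selector,
			})
			if err != nil {
				return false, err
			}
			// Done only once every Hubble Pod has fully terminated.
			return len(pods.Items) == 0, nil
		})
}
```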
tommyp1ckles force-pushed the pr/tp/use-foreground-deletion-for-uninstall branch from 1b96d74 to 47f6b9e on February 28, 2024 01:52
christarazi approved these changes on Feb 28, 2024
derailed approved these changes on Feb 28, 2024
@tommyp1ckles Nice work!
michi-covalent approved these changes on Feb 28, 2024