waitUntilReady leaves watches/websocket open upon timeout #3598
Comments
What version of fabric8 is this on?
We've observed this with 5.9.0 and 5.7.2.
shawkins added a commit to shawkins/kubernetes-client that referenced this issue on Nov 16, 2021
Under the covers the logic should terminate the informer / watch when the future completes - Line 881 in 278ca23
However, I do see that the cancel is being applied to the wrong future. I've opened a PR for this.
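To illustrate the fix, here is a minimal, self-contained sketch of the pattern described above using plain java.util.concurrent; the names watchFuture and readinessFuture are hypothetical and do not correspond to the client's actual internals, but they show why the cleanup has to be attached to the future the caller actually waits on.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CancelRightFutureSketch {
    public static void main(String[] args) throws Exception {
        // Stands in for the long-lived watch/websocket; it only completes
        // when the connection is torn down.
        CompletableFuture<Void> watchFuture = new CompletableFuture<>();

        // Stands in for the future waitUntilReady blocks on; it never
        // completes here, simulating a pod that never becomes ready.
        CompletableFuture<String> readinessFuture = new CompletableFuture<>();

        // The key point: the cleanup is attached to the future the caller
        // waits on, so cancelling it on timeout also tears down the watch.
        readinessFuture.whenComplete((pod, error) -> watchFuture.cancel(true));

        try {
            readinessFuture.get(1, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // Cancelling here propagates through whenComplete and closes
            // the (simulated) watch as well.
            readinessFuture.cancel(true);
        }

        System.out.println("watch closed: " + watchFuture.isCancelled()); // prints true
    }
}
```

If the cancel were attached to some other future instead, the timeout path would leave watchFuture (and the real websocket it represents) open, which matches the behaviour reported in this issue.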
Aha, great, thank you @shawkins
manusa pushed a commit to shawkins/kubernetes-client that referenced this issue on Nov 22, 2021
manusa pushed a commit to shawkins/kubernetes-client that referenced this issue on Nov 23, 2021
manusa pushed a commit to shawkins/kubernetes-client that referenced this issue on Nov 23, 2021
manusa pushed a commit that referenced this issue on Nov 23, 2021
It appears that waitUntilReady for a Pod will leave a Watch, and therefore a websocket connection, open if the Pod fails to become ready in time. This can lead to a buildup of open websocket connections to the k8s API and, eventually, resource exhaustion.

We've not dug into every nuance, but in our usage we are deleting the pod after the timeout occurs. The watch seems to remain open despite both the timeout and the deletion.
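For reference, a minimal sketch of the kind of usage that appears to trigger this, assuming a hypothetical pod my-pod in the default namespace; the exact exception thrown on timeout varies across client versions, so the broad KubernetesClientException is caught here.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;

import java.util.concurrent.TimeUnit;

public class WaitUntilReadyTimeout {
    public static void main(String[] args) throws Exception {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            try {
                // Blocks until the pod is ready or the timeout elapses.
                Pod ready = client.pods()
                        .inNamespace("default")
                        .withName("my-pod")
                        .waitUntilReady(30, TimeUnit.SECONDS);
            } catch (KubernetesClientException e) {
                // On timeout we delete the pod, but the watch opened by
                // waitUntilReady reportedly stays open, leaking a websocket
                // connection to the API server.
                client.pods().inNamespace("default").withName("my-pod").delete();
            }
        }
    }
}
```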
One workaround seems to be to use a custom Watcher and manually ensure that the ensuing Watch is closed when a failure occurs.
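A minimal sketch of that workaround, again assuming a hypothetical pod my-pod in the default namespace; the readiness check is deliberately simplified (a real one would inspect the pod's Ready condition), and try-with-resources guarantees the Watch, and with it the websocket, is closed whether the pod becomes ready or the wait times out.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.Watch;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class ManualWatchWorkaround {
    public static void main(String[] args) throws Exception {
        CountDownLatch ready = new CountDownLatch(1);
        try (KubernetesClient client = new DefaultKubernetesClient();
             // Closing the Watch explicitly (here via try-with-resources)
             // releases the underlying websocket even on timeout.
             Watch watch = client.pods()
                     .inNamespace("default")
                     .withName("my-pod")
                     .watch(new Watcher<Pod>() {
                         @Override
                         public void eventReceived(Action action, Pod pod) {
                             // Simplified check; a real one would look at the
                             // pod's Ready condition rather than its phase.
                             if (pod.getStatus() != null
                                     && "Running".equals(pod.getStatus().getPhase())) {
                                 ready.countDown();
                             }
                         }

                         @Override
                         public void onClose(WatcherException cause) {
                             // Watch closed, either explicitly or due to an error.
                         }
                     })) {
            boolean becameReady = ready.await(30, TimeUnit.SECONDS);
            System.out.println("pod ready in time: " + becameReady);
        } // leaving the block closes both the Watch and the client
    }
}
```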