EBS volumes cannot reattach to PetSet after unexpected detachment #37662
Comments
I can confirm this issue. From what I have read of the other EBS volume issues, it should be fixed by the open PR #37302.
@patzeltjonas Thanks, that is excellent news!
The fix #36840 is merged in master. It should be backported to release 1.4 soon. Please let me know if you have any issues after upgrading. Thanks!
I see via #37867 that this has been placed in the release-1.4 branch; any idea where I can find when the next 1.4 release is planned?
Updated to v1.5.0 (and simultaneously migrated from PetSet to StatefulSet). I have experimented with some automated tests that bring these pods up and down rapidly, in circumstances very similar to the ones that would quickly break v1.4.6, and I have yet to see this issue since the update. It certainly appears to be resolved. As for the linked issues, I have not seen #37844. I can confirm that I do occasionally see the VolumeInUse issue from #37854, but it is of much lower severity for us.
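For reference, a minimal sketch of that kind of up/down stress test, assuming a StatefulSet pod named web-0 in the default namespace (both names are illustrative and not taken from this issue):

```sh
# Illustrative stress loop (pod name and namespace are placeholders):
# repeatedly delete one StatefulSet pod and wait for the controller to
# recreate it, forcing the EBS volume to detach and reattach each cycle.
for i in $(seq 1 20); do
  kubectl delete pod web-0 --namespace=default
  sleep 10   # crude wait for the old pod to finish terminating
  # wait for the replacement pod to report Running before the next cycle
  until kubectl get pod web-0 --namespace=default \
      -o jsonpath='{.status.phase}' 2>/dev/null | grep -q Running; do
    sleep 5
  done
done
```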
@demotivated, thank you for your update. You mentioned you occasionally see the VolumeInUse issue; could you please let me know more details about it, or share some logs from when it happened? Thanks a lot!
@jingxu97 I have not seen the issue in several days and unfortunately have no logs to share. The sequence typically looks like this:
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
Yes
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
I am aware of the similar issue #29166, which was fixed by #36616 in v1.4.6. However, I can still reproduce this as of v1.4.6.
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug Report
Kubernetes version (use kubectl version): v1.4.6
Environment:
uname -a: 4.7.3-coreos-r2
What happened:
Periodically, petsets will drop below the number of desired replicas
and be unable to restore themselves.
The petset shows the following error:
What you expected to happen:
I expect EBS volumes to reattach to the correct pod automatically
following node failure.
How to reproduce it (as minimally and precisely as possible):
happen at all, but this is the most reliable way I've found
to reproduce. It has also occurred randomly.
The pod remains stuck in state ContainerCreating.
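A hedged sketch of how that stuck state can be inspected, assuming a pod named web-0 in the default namespace and a placeholder EBS volume ID (none of these identifiers come from this report):

```sh
# Illustrative diagnosis of a pod stuck in ContainerCreating
# (pod name, namespace, and volume ID are placeholders):
kubectl describe pod web-0 --namespace=default   # look for FailedMount / attach errors in Events
kubectl get events --namespace=default --sort-by=.metadata.creationTimestamp | tail -n 20

# On AWS, check whether the EBS volume is still attached to the old instance:
aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
  --query 'Volumes[0].Attachments'
```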
Anything else do we need to know:
So far, I have been able to work around this issue by terminating the node that the pod is attempting to attach onto.
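For completeness, a rough sketch of that workaround, assuming a node named ip-10-0-1-23.ec2.internal backed by EC2 instance i-0123456789abcdef0 (both placeholders): drain and remove the node so the pod is rescheduled elsewhere, then terminate the backing instance so the stale EBS attachment is released.

```sh
# Sketch of the workaround: evict and terminate the node the pod is trying
# to attach onto (node name and instance ID are placeholders).
kubectl drain ip-10-0-1-23.ec2.internal --force --ignore-daemonsets
kubectl delete node ip-10-0-1-23.ec2.internal
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```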