[redis] Sentinel from master does not recover itself #3562
Comments
Hi, we have plans to improve our current Redis Sentinel configuration, as we detected some failover issues. We will update this ticket as soon as we have more news. Thank you very much for reporting!
Is there any solution on this topic yet? We are currently facing the same issue.
Hi, could you share the logs of the issue so we can see whether there's any difference from the ones shared by the OP? Is it something you can easily reproduce by removing the pods? We've been making improvements to the failover mechanism and we would like to understand what caused the issue this time.
Unfortunately, this issue was created a long time ago and, although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. This is not for a technical reason but a capacity one, since we're a small team. That being said, contributions via PRs are more than welcome in both repositories (containers and charts), in case you would like to contribute. Since then, there have been several releases of this asset, and it's possible the issue has been resolved as part of other changes. If that's not the case and you are still experiencing this issue, please feel free to reopen it and we will re-evaluate it.
Which chart:
bitnami/redis, 10.7.16
Describe the bug
We're using the Redis Helm chart in a topology with one master and two slaves.
When the sentinel container inside the pod that contains the master crashes, it is not able to recover itself.
In the logs, we can see the error below:
To Reproduce
Steps to reproduce the behavior:
Where values.yaml is:
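As an illustrative sketch only (the values below are assumptions, not the reporter's actual values.yaml), a sentinel-enabled install of bitnami/redis 10.7.16 with one master and two slaves looks roughly like this:

```sh
# Sketch only: assumed values, not the original values.yaml from this report.
cat > values.yaml <<'EOF'
cluster:
  enabled: true
  slaveCount: 2
sentinel:
  enabled: true
EOF

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-redis bitnami/redis --version 10.7.16 -f values.yaml
```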
Redis container in the master pod:
Sentinel container in the master pod:
Master pod ip:
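The container list and pod IP referenced above can be checked with standard kubectl queries; the pod name below assumes a release called my-redis, matching the pod names in this report:

```sh
# List the containers in the master pod (the chart runs a redis and a sentinel container).
kubectl get pod my-redis-master-0 -o jsonpath='{.spec.containers[*].name}'

# Print the master pod IP that the sentinels are monitoring.
kubectl get pod my-redis-master-0 -o jsonpath='{.status.podIP}'
```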
Note that when the sentinel in my-redis-slave-0 crashes, it is able to recover itself.
Shut down the sentinel inside the pod that contains the master:
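A sketch of this step, assuming the sentinel container is named sentinel and listens on the default port 26379 (if the chart's password auth applies to sentinel, redis-cli additionally needs -a):

```sh
# Ask the sentinel in the master pod to shut down; Kubernetes restarts the
# container, and this is where it fails to recover.
kubectl exec my-redis-master-0 -c sentinel -- redis-cli -p 26379 shutdown

# The same shutdown against the slave pod's sentinel recovers fine (step 4 above).
kubectl exec my-redis-slave-0 -c sentinel -- redis-cli -p 26379 shutdown
```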
In the log of the sentinel container of the my-redis-master-0 pod:
In the log of the sentinel container of the my-redis-slave-0 pod:
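To compare the behavior of the two sentinels, their logs can be pulled as follows (container name assumed to be sentinel):

```sh
# Compare the sentinel logs of the master and slave pods after the shutdown.
kubectl logs my-redis-master-0 -c sentinel --tail=100
kubectl logs my-redis-slave-0 -c sentinel --tail=100
```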
Expected behavior
It is expected that the sentinel container inside the master pod is able to recover itself, just as the one in the slave pod was able to (step 4 above).
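One way to check whether a restarted sentinel has rejoined monitoring is to query it for the master address; this sketch assumes the chart's default master set name mymaster and the default sentinel port:

```sh
# A recovered sentinel should answer with the current master's IP and port.
kubectl exec my-redis-master-0 -c sentinel -- \
  redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
```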
Version of Helm and Kubernetes:
helm version:
kubectl version: