
[bitnami/redis] Lots of SSL errors in pod logs when enabling TLS for Redis Sentinel deployment #5612

Closed
shenqinb-star opened this issue Feb 25, 2021 · 13 comments
Labels
on-hold Issues or Pull Requests with this label will never be considered stale

Comments

@shenqinb-star

Which chart:
bitnami/redis:12.7.6

Chart values:
sentinel.enabled: true (screenshot of the values omitted)
tls.enabled: true (screenshot of the values omitted)

Describe the bug
After deploying Redis Sentinel with TLS enabled on Kubernetes, I get a lot of SSL errors in the pod logs when running kubectl logs redis-sentinel-node-0 redis. The error occurs every second. However, Redis works well when I connect with the redis-cli.
(screenshot of the SSL errors omitted)
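For reference, a sketch of an equivalent install command; the tls.* keys are assumed from the chart's documentation, and the certificate secret name is hypothetical:

```bash
# Sketch: deploying bitnami/redis 12.7.6 with Sentinel and TLS enabled.
# The tls.* keys are assumed from the chart docs; the secret name
# "redis-tls-certs" is hypothetical and must hold the listed files.
helm install redis-sentinel bitnami/redis --version 12.7.6 \
  --set sentinel.enabled=true \
  --set tls.enabled=true \
  --set tls.certificatesSecret=redis-tls-certs \
  --set tls.certFilename=tls.crt \
  --set tls.certKeyFilename=tls.key \
  --set tls.certCAFilename=ca.crt
```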

@shenqinb-star
Author

1. I'm wondering where these errors come from. I'm sure they are not caused by external client connections.
2. The errors still occur after I disable the livenessProbe and readinessProbe.

@marcosbc
Contributor

Hi @shenqinb-star, we supposedly implemented a fix for this in #3061; however, running helm template I noticed the flag no longer appears for the master node. It was apparently removed in #3658.

Since you found the issue, it would be awesome if you wanted to contribute a fix for this. In principle you would only need to add the --tls-replication yes flag to the start-master.sh script, the same way it is set in the start-slave.sh script. Are you up for it?
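A minimal sketch of that change, assuming the ARGS-array pattern the chart's scripts use (quoted later in this thread):

```bash
# Sketch of the start-master.sh change. Context lines are illustrative,
# not the chart's exact script; only the last line is the actual fix.
ARGS=("--port" "${REDIS_PORT}")        # existing startup args (assumed)
# ... other TLS flags are appended here when TLS is enabled ...
ARGS+=("--tls-replication" "yes")      # the flag from #3061 to restore
```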

@shenqinb-star
Author

shenqinb-star commented Feb 26, 2021

Hi @marcosbc, thanks for your reply. I'm happy to contribute a fix. The change should be made in templates/configmap-scripts.yaml, am I right?

@shenqinb-star
Author

shenqinb-star commented Feb 26, 2021

Hi @marcosbc, I made the following fix on my laptop, but I still get the SSL errors. Am I missing something?

1. Download the chart to my laptop: helm pull bitnami/redis --version 12.7.7 --untar
2. Add ARGS+=("--tls-replication" "yes") to templates/configmap-scripts.yaml (screenshot omitted)
3. Install the local chart: helm install redis-sentinel ./redis (see the verification sketch below)
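One way to double-check the edit before installing is to render the chart locally; a sketch, assuming the chart renders with these two values alone (a full TLS setup may need more):

```bash
# Render the scripts configmap from the local chart copy and confirm
# the replication flag is present. Release name mirrors the steps above.
helm template redis-sentinel ./redis \
  --set sentinel.enabled=true --set tls.enabled=true \
  --show-only templates/configmap-scripts.yaml | grep -n "tls-replication"
```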

@shenqinb-star
Author

After reading templates/configmap-scripts.yaml, I see that it only runs start-node.sh and start-sentinel.sh when cluster.enabled=true and sentinel.enabled=true; start-master.sh doesn't take effect.

@marcosbc
Contributor

@shenqinb-star You seem to be right, I forgot to enable sentinel.enabled in my local copy.

If you run kubectl describe configmap NAME-scripts (where NAME is your deployment name), can you see --tls-replication being enabled in start-node.sh?

@shenqinb-star
Author

@marcosbc I checked. Yes, both start-node.sh and start-sentinel.sh have ARGS+=("--tls-replication" "yes").

@shenqinb-star
Author

shenqinb-star commented Mar 1, 2021

Since the error occurs every second, I guess the potential cause is that Redis uses a non-TLS connection for the heartbeat health check, with a command like redis-cli -h localhost -p 6379 -a <password> ping, but I couldn't find where to reconfigure it.
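If that guess is right, the health-check ping would need the client-side TLS flags. A hypothetical TLS-aware variant (redis-cli's --tls/--cert/--key/--cacert flags are standard, but the certificate paths below are assumptions, not necessarily the chart's actual mount paths):

```bash
# Hypothetical TLS-aware health-check ping; certificate paths are assumed.
redis-cli -h localhost -p 6379 -a "$REDIS_PASSWORD" \
  --tls \
  --cert /opt/bitnami/redis/certs/tls.crt \
  --key /opt/bitnami/redis/certs/tls.key \
  --cacert /opt/bitnami/redis/certs/ca.crt \
  ping
```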

@shenqinb-star
Author

Hi @marcosbc, any thoughts on this issue?

@marcosbc
Contributor

marcosbc commented Mar 2, 2021

Hi @shenqinb-star, note that you can check whether the --tls flag appears by running helm template and looking at the health-configmap.yaml file. When I run the command, it does appear for me.
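A sketch of that check (the release name, chart version, and values are assumptions):

```bash
# Render only the health configmap and look for the --tls flag.
# "--" stops grep from parsing the pattern as an option.
helm template redis-sentinel bitnami/redis --version 12.7.7 \
  --set sentinel.enabled=true --set tls.enabled=true \
  --show-only templates/health-configmap.yaml | grep -n -- "--tls"
```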

Right now I have no idea what may be causing this. The --tls-replication option is a server-side flag and is not applicable to redis-cli.

For now, I have created an internal task to look into this, but unfortunately I cannot give an ETA for when we will start. If you happen to find a solution, feel free to send a PR; we'd be happy to help release a new chart with those changes.

Also cc @javsalgar in case he has any ideas on where to start looking.

@marcosbc added the on-hold label on Mar 2, 2021
@shenqinb-star
Author

Hi @marcosbc, thanks for your help. Please let me know if you have any updates.

@carrodher
Member

Unfortunately, this issue was created a long time ago and, although there is an internal task to fix it, it was not prioritized as something to address in the short/mid term. That is not for a technical reason but a matter of capacity, since we're a small team.

That being said, contributions via PRs are more than welcome in both repositories (containers and charts), in case you would like to contribute.

Since then, there have been several releases of this asset, and it's possible the issue has been resolved as part of other changes. If that's not the case and you are still experiencing it, please feel free to reopen the issue and we will re-evaluate it.

@arshavardan

arshavardan commented Feb 5, 2024

I'm facing this issue!
Has anyone got any update on this? @shenqinb-star @marcosbc @carrodher
