Sentinels are not working in failover case #3472
Hi, It seems that you are using the
Hi, it's the Redis chart from Bitnami charts; I have only changed the release name to redis-ha. Kindly check the full configuration with which the Redis Sentinel-based HA is deployed. Thanks,
Hi, sorry, I misread the YAML. Could you add proper code blocks so it's more readable? Regarding the client, do you have the Go code you are using for testing? I would like to reproduce the exact issue.
Hi, please find a snippet of my current client:

if config.RedisSentinel {
    err := RClient.Ping(ctx).Err()
    if err != nil {
        log.Print("Connection failed with redis master server", err)
    }
    log.Print("Successfully connected to the redis master server:", config.RedisSentinelAddrs)
}

Thanks,
Hi, I imagine this will require importing some libraries and executing it with a set of commands. Could you provide a link to a GitHub repo that I can clone, run make on, and use for testing? Sorry for the inconvenience, but it would be very helpful for the engineering team.
Hi, we are not using any Git repo; it is the same code that we are using as the client. Thanks,
Hi, I imagine that, in your code, the use of
Hi,
Hi, just a quick note that I was able to reproduce the failover error. I will forward this to the engineering team so we can work on a fix. As soon as I have more news, we will update this ticket.
Hi, thanks for the confirmation on this issue.
Can I hear some noise on this? :)
Hi, we have planned to work on this during the following weeks; as soon as there is more news, we will update the ticket.
Great! Thanks
Hi @mahanam,
Hi @rafariossaa, it is basically a 3-node setup on which I am deploying my redis-ha. Let me know if you need more details regarding the system.
Hi, I hope your queries were answered in the previous comment. Any chance of a status update, as this is still on hold? Thanks
Hi,
Hi, I have tried the fix from the branch "rafariossaa:working_redis_fix" and it behaves the same as the previous issue. Please find the simple steps below which I followed, for your reference.
=======================================================
For more logs:
sentinel logs of the new pod [10.233.121.251]
redis logs of the new pod [10.233.121.251]
=================================================
Sentinel values used (snippet): networkPolicy, usePassword: false, securityContext, command: "redis-server" (for both master and slave), disableCommands: FLUSHDB, FLUSHALL, metrics, configmap: |-
===========================================================
[debug] SERVER: "127.0.0.1:43627"
Release "redis-master" does not exist. Installing it now.
NAME: redis-master
COMPUTED VALUES:
# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
# Disable RDB persistence, AOF persistence already enabled.
save ""
HOOKS:
Source: redis/templates/configmap-scripts.yaml
apiVersion: v1
start-sentinel.sh: |
Source: redis/templates/configmap.yaml
apiVersion: v1
We created a chart using an image built from the redis 6.0.8 code base and ran a couple of performance comparison tests:
./redis-benchmark -h 10.233.127.60 -t hset -r 100000 -n 1000000
The above command gave 64968.81 requests per second on a stand-alone Redis server with the Helm chart built from the redis 6.0.8 code base, whereas the Bitnami chart you provided gave us around 62262.62 requests per second.
A snippet of our Dockerfile, the ====== HSET ====== benchmark results for both the Bitnami templates and our templates, the DB configs (defaults unchanged), and the Redis server info for both setups were attached.
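As a quick sanity check on the numbers above, the gap between the two charts can be computed directly (a minimal sketch; the two figures are the requests/sec values quoted in the comment, and `relativeSlowdown` is a hypothetical helper):

```go
package main

import "fmt"

// relativeSlowdown returns how much slower `measured` is than `baseline`,
// expressed as a percentage of the baseline throughput.
func relativeSlowdown(baseline, measured float64) float64 {
	return (baseline - measured) / baseline * 100
}

func main() {
	// requests/sec: custom redis-6.0.8 chart vs. Bitnami chart, as reported above
	fmt.Printf("%.2f%%\n", relativeSlowdown(64968.81, 62262.62))
}
```

So the Bitnami chart is roughly 4% slower on this HSET workload, which is within the range one would expect from differing defaults rather than a structural problem.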
Hi @etavene, thanks beforehand.
NAME: redis-ha
REVISION: 1
RELEASED: Wed Aug 19 19:04:31 2020
CHART: redis-10.7.16
USER-SUPPLIED VALUES:
{}
Actually, I pointed my redis-go client at my setup => [single master {redis+sentinel} and 3 slaves {redis+sentinel}]
=========================================================
During this process, it reconnects to the appropriate master after a client restart
======================================================
=========================================================
Logs:
2020/08/19 19:13:07 Initializing the redis client...
2020/08/19 19:13:07 Getting the sentinel redis master...
redis: 2020/08/19 19:13:07 sentinel.go:470: sentinel: discovered new sentinel="7a732b2572c10a61e22e6e79885d1286b943669d" for master="mymaster"
redis: 2020/08/19 19:13:07 sentinel.go:470: sentinel: discovered new sentinel="852f11a9d3634574e8786838ce5fb26c43f6bec1" for master="mymaster"
redis: 2020/08/19 19:13:07 sentinel.go:470: sentinel: discovered new sentinel="3301ae7a855f25d2a6b0085a879c3609be823e72" for master="mymaster"
redis: 2020/08/19 19:13:07 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.7:6379"
2020/08/19 19:13:07 Successfully connected to the redis master server:[10.233.13.207:26379]
2020/08/19 19:13:07 Retrieved key 0:0
2020/08/19 19:13:08 Retrieved key 1:1
2020/08/19 19:13:09 Retrieved key 2:2
2020/08/19 19:13:10 Retrieved key 3:3
2020/08/19 19:13:11 Retrieved key 4:4
2020/08/19 19:13:12 Retrieved key 5:5
2020/08/19 19:13:13 Retrieved key 6:6
2020/08/19 19:13:14 Retrieved key 7:7
2020/08/19 19:13:15 Retrieved key 8:8
2020/08/19 19:13:16 Retrieved key 9:9
2020/08/19 19:13:17 Retrieved key 10:10
2020/08/19 19:13:18 Retrieved key 11:11
2020/08/19 19:13:19 Retrieved key 12:12
2020/08/19 19:13:20 Retrieved key 13:13
2020/08/19 19:13:21 Retrieved key 14:14
2020/08/19 19:13:22 Retrieved key 15:15
2020/08/19 19:13:25 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:27 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:27 Retrieved key 16:
2020/08/19 19:13:29 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:30 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:30 Retrieved key 17:
2020/08/19 19:13:33 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:35 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:35 Retrieved key 18:
2020/08/19 19:13:38 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:40 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:40 Retrieved key 19:
2020/08/19 19:13:43 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:44 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:44 Retrieved key 20:
2020/08/19 19:13:47 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:49 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:49 Retrieved key 21:
2020/08/19 19:13:51 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:52 dial tcp 10.233.77.7:6379: connect: invalid argument
2020/08/19 19:13:52 Retrieved key 22:
redis: 2020/08/19 19:13:53 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.11:6379"
2020/08/19 19:13:53 Retrieved key 23:23
2020/08/19 19:13:54 Retrieved key 24:24
2020/08/19 19:13:55 Retrieved key 25:25
2020/08/19 19:13:56 Retrieved key 26:26
2020/08/19 19:13:57 Retrieved key 27:27
2020/08/19 19:13:58 Retrieved key 28:28
...
2020/08/19 19:15:11 Retrieved key 100:100
2020/08/19 19:15:12 Retrieved key 101:101
redis: 2020/08/19 19:15:13 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.12:6379"
2020/08/19 19:15:13 Retrieved key 102:102
2020/08/19 19:15:14 Retrieved key 103:103
2020/08/19 19:15:15 Retrieved key 104:104
2020/08/19 19:15:16 Retrieved key 105:105
...
2020/08/19 19:16:47 Retrieved key 195:195
redis: 2020/08/19 19:16:48 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.11:6379"
2020/08/19 19:16:50 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:16:52 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:16:52 Retrieved key 196:
2020/08/19 19:16:54 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:16:56 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:16:56 Retrieved key 197:
2020/08/19 19:16:58 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:00 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:00 Retrieved key 198:
2020/08/19 19:17:02 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:03 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:03 Retrieved key 199:
2020/08/19 19:17:05 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:07 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:07 Retrieved key 200:
2020/08/19 19:17:09 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:10 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:10 Retrieved key 201:
2020/08/19 19:17:13 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:14 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:14 Retrieved key 202:
2020/08/19 19:17:17 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:18 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:18 Retrieved key 203:
2020/08/19 19:17:21 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:23 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:23 Retrieved key 204:
2020/08/19 19:17:26 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:27 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:27 Retrieved key 205:
2020/08/19 19:17:30 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:31 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:31 Retrieved key 206:
2020/08/19 19:17:34 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:35 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:35 Retrieved key 207:
2020/08/19 19:17:37 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:38 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:38 Retrieved key 208:
2020/08/19 19:17:41 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:43 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:43 Retrieved key 209:
2020/08/19 19:17:46 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:48 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:48 Retrieved key 210:
2020/08/19 19:17:50 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:51 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:51 Retrieved key 211:
2020/08/19 19:17:54 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:56 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:56 Retrieved key 212:
2020/08/19 19:17:58 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:59 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:17:59 Retrieved key 213:
2020/08/19 19:18:02 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:04 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:04 Retrieved key 214:
2020/08/19 19:18:06 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:08 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:08 Retrieved key 215:
2020/08/19 19:18:10 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:12 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:12 Retrieved key 216:
2020/08/19 19:18:14 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:17 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:17 Retrieved key 217:
2020/08/19 19:18:19 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:20 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:20 Retrieved key 218:
2020/08/19 19:18:22 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:24 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:18:24 Retrieved key 219:
===============================================================================
2020/08/19 19:18:58 Initializing the redis client...
2020/08/19 19:18:58 Getting the sentinel redis master...
redis: 2020/08/19 19:18:58 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.13:6379"
2020/08/19 19:18:58 Successfully connected to the redis master server:[10.233.13.207:26379]
2020/08/19 19:18:58 Retrieved key 0:0
2020/08/19 19:18:59 Retrieved key 1:1
2020/08/19 19:19:00 Retrieved key 2:2
2020/08/19 19:19:01 Retrieved key 3:3
2020/08/19 19:19:02 Retrieved key 4:4
2020/08/19 19:19:03 Retrieved key 5:5
2020/08/19 19:19:04 Retrieved key 6:6
2020/08/19 19:19:05 Retrieved key 7:7
2020/08/19 19:19:06 Retrieved key 8:8
2020/08/19 19:19:07 Retrieved key 9:9
=============================================================================================
2020/08/19 19:20:20 Initializing the redis client...
2020/08/19 19:20:20 Getting the sentinel redis master...
redis: 2020/08/19 19:20:20 sentinel.go:470: sentinel: discovered new sentinel="852f11a9d3634574e8786838ce5fb26c43f6bec1" for master="mymaster"
redis: 2020/08/19 19:20:20 sentinel.go:470: sentinel: discovered new sentinel="3301ae7a855f25d2a6b0085a879c3609be823e72" for master="mymaster"
redis: 2020/08/19 19:20:20 sentinel.go:470: sentinel: discovered new sentinel="57bff9e9e2cca87c135ec4eefc146a08f4654b9d" for master="mymaster"
redis: 2020/08/19 19:20:20 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.11:6379"
2020/08/19 19:20:22 Connection failed with redis master serverdial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:20:22 Successfully connected to the redis master server:[10.233.13.207:26379]
2020/08/19 19:20:24 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:20:25 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:20:25 Retrieved key 0:
2020/08/19 19:20:27 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:20:29 dial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:20:29 Retrieved key 1:
2020/08/19 19:20:32 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument
2020/08/19 19:20:34 dial tcp 10.233.77.11:6379: connect: invalid argument
2
Expected behavior
If you delete the current master pod, the client should be able to connect to the new Redis master after the failover window.
Version of Helm and Kubernetes:
demo@demo1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:20:25Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
demo@demo1:~$ helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
demo@demo1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Configuration of debug manifest files:
[debug] Created tunnel using local port: '35084'
[debug] SERVER: "127.0.0.1:35084"
Release "redis-ha" does not exist. Installing it now.
[debug] CHART PATH: /home/demo/git/sdpon-manifest/templates/sdponcharts/charts/voltha/charts/redis-ha
NAME: redis-ha
REVISION: 1
RELEASED: Wed Aug 19 19:04:31 2020
CHART: redis-10.7.16
USER-SUPPLIED VALUES:
{}
COMPUTED VALUES:
cluster:
enabled: true
slaveCount: 3
clusterDomain: cluster.local
configmap: |-
# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
appendfsync everysec
#no-appendfsync-on-rewrite no
#save 900 1
#save 300 10
#save 60 10000
# Disable RDB persistence, AOF persistence already enabled.
save ""
global:
redis: {}
image:
pullPolicy: IfNotPresent
registry: docker-registry.com:5000
repository: redis
tag: 6.0.6
master:
affinity: {}
command: redis-server
configmap: null
customLivenessProbe: {}
customReadinessProbe: {}
extraFlags: []
livenessProbe:
enabled: true
failureThreshold: 5
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
persistence:
accessModes:
- ReadWriteOnce
enabled: true
matchExpressions: {}
matchLabels: {}
path: /data
size: 8Gi
subPath: ""
podAnnotations: {}
podLabels: {}
priorityClassName: {}
readinessProbe:
enabled: true
failureThreshold: 5
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
service:
annotations: {}
labels: {}
loadBalancerIP: null
port: 6379
type: ClusterIP
statefulset:
updateStrategy: RollingUpdate
metrics:
enabled: false
image:
pullPolicy: IfNotPresent
registry: docker.io
repository: bitnami/redis-exporter
tag: 1.9.0-debian-10-r20
podAnnotations:
prometheus.io/port: "9121"
prometheus.io/scrape: "true"
prometheusRule:
additionalLabels: {}
enabled: false
namespace: ""
rules: []
service:
annotations: {}
labels: {}
type: ClusterIP
serviceMonitor:
enabled: false
selector:
prometheus: kube-prometheus
networkPolicy:
enabled: true
ingressNSMatchLabels: {}
ingressNSPodMatchLabels: {}
password: null
persistence:
existingClaim: null
podDisruptionBudget:
enabled: false
minAvailable: 1
podSecurityPolicy:
create: false
rbac:
create: false
role:
rules: []
redisPort: 6379
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 0
sentinel:
configmap: null
customLivenessProbe: {}
customReadinessProbe: {}
downAfterMilliseconds: 10000
enabled: true
failoverTimeout: 9000
image:
pullPolicy: IfNotPresent
registry: docker.io
repository: bitnami/redis-sentinel
tag: 6.0.6-debian-10-r11
initialCheckTimeout: 5
livenessProbe:
enabled: false
failureThreshold: 5
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
masterSet: mymaster
parallelSyncs: 1
port: 26379
quorum: 2
readinessProbe:
enabled: false
failureThreshold: 5
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
service:
annotations: {}
labels: {}
loadBalancerIP: null
redisPort: 6379
sentinelPort: 26379
type: ClusterIP
staticID: false
usePassword: true
serviceAccount:
create: false
name: null
slave:
affinity: {}
command: redis-server
configmap: null
customLivenessProbe: {}
customReadinessProbe: {}
extraFlags: []
livenessProbe:
enabled: true
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
persistence:
accessModes:
- ReadWriteOnce
enabled: true
matchExpressions: {}
matchLabels: {}
path: /data
size: 8Gi
subPath: ""
podAnnotations: {}
podLabels: {}
port: 6379
readinessProbe:
enabled: true
failureThreshold: 5
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
service:
annotations: {}
labels: {}
loadBalancerIP: null
port: 6379
type: ClusterIP
spreadConstraints: {}
statefulset:
updateStrategy: RollingUpdate
sysctlImage:
command: []
enabled: false
mountHostSys: false
pullPolicy: Always
registry: docker.io
repository: bitnami/minideb
resources: {}
tag: buster
tls:
authClients: true
certCAFilename: null
certFilename: null
certKeyFilename: null
certificatesSecret: null
enabled: false
usePassword: true
usePasswordFile: false
volumePermissions:
enabled: false
image:
pullPolicy: Always
registry: docker.io
repository: bitnami/minideb
tag: buster
resources: {}
HOOKS:
MANIFEST:
Source: redis/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: redis-ha
namespace: default
labels:
app: redis
chart: redis-10.7.16
release: redis-ha
heritage: Tiller
spec:
podSelector:
matchLabels:
app: redis
release: redis-ha
policyTypes:
- Ingress
- Egress
egress:
# Allow dns resolution
- ports:
- port: 53
protocol: UDP
# Allow outbound connections to other cluster pods
- ports:
- port: 6379
- port: 26379
to:
- podSelector:
matchLabels:
app: redis
release: redis-ha
ingress:
# Allow inbound connections
- ports:
- port: 6379
- port: 26379
from:
- podSelector:
matchLabels:
redis-ha-client: "true"
- podSelector:
matchLabels:
app: redis
release: redis-ha
Source: redis/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: redis-ha
namespace: default
labels:
app: redis
chart: redis-10.7.16
release: "redis-ha"
heritage: "Tiller"
type: Opaque
data:
redis-password: "aUZGUkFoc1hxMA=="
Source: redis/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-ha
namespace: default
labels:
app: redis
chart: redis-10.7.16
heritage: Tiller
release: redis-ha
data:
redis.conf: |-
# User-supplied configuration:
# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
appendfsync everysec
#no-appendfsync-on-rewrite no
#save 900 1
#save 300 10
#save 60 10000
# Disable RDB persistence, AOF persistence already enabled.
save ""
master.conf: |-
dir /data
replica.conf: |-
dir /data
slave-read-only yes
sentinel.conf: |-
dir "/tmp"
bind 0.0.0.0
port 26379
sentinel monitor mymaster redis-ha-master-0.redis-ha-headless.default.svc.cluster.local 6379 2
sentinel down-after-milliseconds mymaster 10000
sentinel failover-timeout mymaster 9000
sentinel parallel-syncs mymaster 1
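Given this sentinel.conf, any client (or debugging session) should resolve the master through Sentinel with `SENTINEL get-master-addr-by-name mymaster` on port 26379, which replies with the IP and port on two lines, rather than caching a pod IP. A minimal sketch of parsing that reply as printed by redis-cli (`parseMasterAddr` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMasterAddr turns the two-line reply of
//   redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
// (IP on the first line, port on the second) into a host:port string.
func parseMasterAddr(reply string) (string, error) {
	fields := strings.Fields(reply)
	if len(fields) != 2 {
		return "", fmt.Errorf("unexpected sentinel reply: %q", reply)
	}
	return fields[0] + ":" + fields[1], nil
}

func main() {
	addr, _ := parseMasterAddr("10.233.77.11\n6379")
	fmt.Println(addr) // 10.233.77.11:6379
}
```

Checking this reply right after deleting the master pod shows whether Sentinel itself has completed the failover (governed by the down-after-milliseconds and failover-timeout values above) or whether the stale address is coming from the client.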
Source: redis/templates/health-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-ha-health
namespace: default
labels:
app: redis
chart: redis-10.7.16
heritage: Tiller
release: redis-ha
data:
ping_readiness_local.sh: |-
#!/bin/bash
no_auth_warning=$([[ "$(redis-cli --version)" =~ (redis-cli 5.*) ]] && echo --no-auth-warning)
response=$(
timeout -s 3 $1 \
redis-cli \
-a $REDIS_PASSWORD $no_auth_warning \
-h localhost \
-p $REDIS_PORT \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
no_auth_warning=$([[ "$(redis-cli --version)" =~ (redis-cli 5.*) ]] && echo --no-auth-warning)
response=$(
timeout -s 3 $1 \
redis-cli \
-a $REDIS_PASSWORD $no_auth_warning \
-h localhost \
-p $REDIS_PORT \
ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
echo "$response"
exit 1
fi
ping_sentinel.sh: |-
#!/bin/bash
no_auth_warning=$([[ "$(redis-cli --version)" =~ (redis-cli 5.*) ]] && echo --no-auth-warning)
response=$(
timeout -s 3 $1 \
redis-cli \
-a $REDIS_PASSWORD $no_auth_warning \
-h localhost \
-p $REDIS_SENTINEL_PORT \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
parse_sentinels.awk: |-
/ip/ {FOUND_IP=1}
/port/ {FOUND_PORT=1}
/runid/ {FOUND_RUNID=1}
!/ip|port|runid/ {
if (FOUND_IP==1) {
IP=$1; FOUND_IP=0;
}
else if (FOUND_PORT==1) {
PORT=$1;
FOUND_PORT=0;
} else if (FOUND_RUNID==1) {
printf "\nsentinel known-sentinel mymaster %s %s %s", IP, PORT, $0; FOUND_RUNID=0;
}
}
ping_readiness_master.sh: |-
#!/bin/bash
no_auth_warning=$([[ "$(redis-cli --version)" =~ (redis-cli 5.*) ]] && echo --no-auth-warning)
response=$(
timeout -s 3 $1 \
redis-cli \
-a $REDIS_MASTER_PASSWORD $no_auth_warning \
-h $REDIS_MASTER_HOST \
-p $REDIS_MASTER_PORT_NUMBER \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
ping_liveness_master.sh: |-
#!/bin/bash
no_auth_warning=$([[ "$(redis-cli --version)" =~ (redis-cli 5.*) ]] && echo --no-auth-warning)
response=$(
timeout -s 3 $1 \
redis-cli \
-a $REDIS_MASTER_PASSWORD $no_auth_warning \
-h $REDIS_MASTER_HOST \
-p $REDIS_MASTER_PORT_NUMBER \
ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
echo "$response"
exit 1
fi
ping_readiness_local_and_master.sh: |-
script_dir="$(dirname "$0")"
exit_status=0
"$script_dir/ping_readiness_local.sh" || exit_status=$?
"$script_dir/ping_readiness_master.sh" || exit_status=$?
exit $exit_status
ping_liveness_local_and_master.sh: |-
script_dir="$(dirname "$0")"
exit_status=0
"$script_dir/ping_liveness_local.sh" || exit_status=$?
"$script_dir/ping_liveness_master.sh" || exit_status=$?
exit $exit_status
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-data-redis-ha-slave-2
labels:
type: local
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/data"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-data-redis-ha-slave-1
labels:
type: local
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
#path: "/mnt/data"
path: "/data"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-data-redis-ha-master-0
labels:
type: local
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
#path: "/mnt/data"
path: "/data"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: redis-data-redis-ha-slave-0
labels:
type: local
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
#path: "/mnt/data"
path: "/data"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-redis-ha-slave-2
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "2Gi"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-redis-ha-slave-1
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "2Gi"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-redis-ha-master-0
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "2Gi"
Source: redis/templates/pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data-redis-ha-slave-0
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "2Gi"
Source: redis/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: redis-ha-headless
namespace: default
labels:
app: redis
chart: redis-10.7.16
release: redis-ha
heritage: Tiller
spec:
type: ClusterIP
clusterIP: None
ports:
- name: redis
port: 6379
targetPort: redis
- name: redis-sentinel
port: 26379
targetPort: redis-sentinel
selector:
app: redis
release: redis-ha
Source: redis/templates/redis-with-sentinel-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: redis-ha
namespace: default
labels:
app: redis
chart: redis-10.7.16
release: redis-ha
heritage: Tiller
spec:
type: ClusterIP
ports:
- name: redis
port: 6379
targetPort: redis
- name: redis-sentinel
port: 26379
targetPort: redis-sentinel
selector:
app: redis
release: redis-ha
Source: redis/templates/redis-master-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-ha-master
namespace: default
labels:
app: redis
chart: redis-10.7.16
release: redis-ha
heritage: Tiller
spec:
selector:
matchLabels:
app: redis
release: redis-ha
role: master
serviceName: redis-ha-headless
template:
metadata:
labels:
app: redis
chart: redis-10.7.16
release: redis-ha
role: master
annotations:
checksum/health: 72098cadec6e6d8bf78527a9626bc96fe2f2dae070bb60d27a7c046c37eea287
checksum/configmap: 7ebd09346d8dfcacb96ee1ddb30e75d7e7e5344a985f5eacd4c2fdeeedfa7fa7
checksum/secret: 3c1003cf7a0ca691891f8285ee98aeb4465fa121f821b0d07e77579849a19cff
spec:
volumeClaimTemplates:
- metadata:
name: redis-data
labels:
app: redis
release: redis-ha
heritage: Tiller
component: master
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
updateStrategy:
type: RollingUpdate
Source: redis/templates/redis-slave-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-ha-slave
namespace: default
labels:
app: redis
chart: redis-10.7.16
release: redis-ha
heritage: Tiller
spec:
replicas: 3
serviceName: redis-ha-headless
selector:
matchLabels:
app: redis
release: redis-ha
role: slave
template:
metadata:
labels:
app: redis
release: redis-ha
chart: redis-10.7.16
role: slave
annotations:
checksum/health: 72098cadec6e6d8bf78527a9626bc96fe2f2dae070bb60d27a7c046c37eea287
checksum/configmap: 7ebd09346d8dfcacb96ee1ddb30e75d7e7e5344a985f5eacd4c2fdeeedfa7fa7
checksum/secret: e8633c629f9222be7f704afcdf994d39c2d3297da0bdbab784b8c0d689308566
spec:
volumeClaimTemplates:
- metadata:
name: redis-data
labels:
app: redis
release: redis-ha
heritage: Tiller
component: slave
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
updateStrategy:
type: RollingUpdate
LAST DEPLOYED: Wed Aug 19 19:04:31 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME AGE
redis-ha 0s
redis-ha-health 0s
==> v1/NetworkPolicy
NAME AGE
redis-ha 0s
==> v1/PersistentVolume
NAME AGE
redis-data-redis-ha-master-0 0s
redis-data-redis-ha-slave-0 0s
redis-data-redis-ha-slave-1 0s
redis-data-redis-ha-slave-2 0s
==> v1/PersistentVolumeClaim
NAME AGE
redis-data-redis-ha-master-0 0s
redis-data-redis-ha-slave-0 0s
redis-data-redis-ha-slave-1 0s
redis-data-redis-ha-slave-2 0s
==> v1/Pod(related)
NAME AGE
redis-ha-master-0 0s
redis-ha-slave-0 0s
==> v1/Secret
NAME AGE
redis-ha 0s
==> v1/Service
NAME AGE
redis-ha 0s
redis-ha-headless 0s
==> v1/StatefulSet
NAME AGE
redis-ha-master 0s
redis-ha-slave 0s
NOTES:
** Please be patient while the chart is being deployed **
Redis can be accessed via port 6379 on the following DNS name from within your cluster:
redis-ha.default.svc.cluster.local for read only operations
For read/write operations, first access the Redis Sentinel cluster, which is available in port 26379 using the same domain name above.
Note: Since NetworkPolicy is enabled, only pods with label
redis-ha-client=true
will be able to connect to redis.