
gitea(v 1.20.3) is not coming up after redis cluster pod ip is changed. #26893

Closed
Baitanik opened this issue Sep 4, 2023 · 4 comments
Labels
issue/needs-feedback For bugs, we need more details. For features, the feature must be described in more detail

Comments


Baitanik commented Sep 4, 2023

Description

We are installing Gitea rootless 1.20.3 with helm chart 9.2.0 in an Azure Kubernetes (AKS) environment.
The cluster is configured as dual stack (IPv4 & IPv6).
On a fresh install everything works fine: the Redis pods come up with the default replica count of 6 and Gitea is able to connect.

When the cluster is rebooted, all pod IPs change as usual. However, Gitea appears to keep the old Redis IP somewhere internally, keeps trying to connect to Redis on that old IP, and does not come up after the cluster reboot.

Below are snippets from "kubectl describe pod" for the same Redis pod before and after the reboot. The error from the Gitea log is included at the end.

Before reboot:

Name:         xxx-gitea-redis-cluster-0
Namespace:    xxx-gitea-v120
Priority:     0
Node:         aks-xxxadmpool-12601847-vmss000004/10.21.59.11
Start Time:   Fri, 01 Sep 2023 09:59:51 +0000
Labels:       app.kubernetes.io/instance=xxx-gitea
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=redis-cluster
              controller-revision-hash=xxx-gitea-redis-cluster-cc896c5b4
              helm.sh/chart=redis-cluster-8.6.9
              statefulset.kubernetes.io/pod-name=xxx-gitea-redis-cluster-0
Annotations:  checksum/config: e66322f24abef75632d7c5335b085dac39cc52cba33a41fcaa2ce3cf4f41de65
              checksum/scripts: 40078a148340be5bb4194b7a9f71cc5472de9f6a1420054ece727d5cffa90ca9
              checksum/secret: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Status:       Running
IP:           10.244.1.28
IPs:
  IP:           10.244.1.28
  IP:           fd12:3456:789a:0:1::701
Controlled By:  StatefulSet/xxx-gitea-redis-cluster
Containers:
  xxx-gitea-redis-cluster:
    Image:         docker.io/bitnami/redis-cluster:7.0.12-debian-11-r2

After reboot:

Name:         xxx-gitea-redis-cluster-0
Namespace:    xxx-gitea-v120
Priority:     0
Node:         aks-ccmadmpool-12601847-vmss000007/10.21.59.11
Start Time:   Mon, 04 Sep 2023 05:14:06 +0000
Labels:       app.kubernetes.io/instance=xxx-gitea
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=redis-cluster
              controller-revision-hash=xxx-gitea-redis-cluster-cc896c5b4
              helm.sh/chart=redis-cluster-8.6.9
              statefulset.kubernetes.io/pod-name=xxx-gitea-redis-cluster-0
Annotations:  checksum/config: e66322f24abef75632d7c5335b085dac39cc52cba33a41fcaa2ce3cf4f41de65
              checksum/scripts: 40078a148340be5bb4194b7a9f71cc5472de9f6a1420054ece727d5cffa90ca9
              checksum/secret: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Status:       Running
IP:           10.244.1.68
IPs:
  IP:           10.244.1.68
  IP:           fd12:3456:789a:0:1::44
Controlled By:  StatefulSet/xxx-gitea-redis-cluster
Containers:

In the Gitea log after the reboot:

2023/09/04 05:30:53 ...les/storage/local.go:33:NewLocalStorage() [I] Creating new Local Storage at /data/packages
2023/09/04 05:31:06 routers/init.go:60:mustInit() [F] code.gitea.io/gitea/modules/cache.NewContext failed: dial tcp [fd12:3456:789a:0:1::701]:6379: connect: no route to host

As you can see here, Gitea is trying to connect to the old address of the Redis cluster master.

Could you please help resolve this issue?
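
For reference: if I understand correctly, cluster-aware Redis clients learn node addresses from the cluster topology itself (e.g. via CLUSTER NODES / CLUSTER SLOTS), not only from DNS, so the stale fd12:3456:789a:0:1::701 address may be coming from Redis rather than from Gitea. A rough way to check which addresses the Redis nodes currently advertise to clients (pod and namespace names taken from the describe output above):

# list the node table that the cluster advertises to clients
kubectl -n xxx-gitea-v120 exec -it xxx-gitea-redis-cluster-0 -- redis-cli cluster nodes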

Gitea Version

1.20.3

Can you reproduce the bug on the Gitea demo site?

No

Log Gist

No response

Screenshots

No response

Git Version

No response

Operating System

azure k8s cluster

How are you running Gitea?

We are running Gitea in an Azure Kubernetes environment.
Kubernetes server version: 1.25

Database

PostgreSQL

@Baitanik Baitanik changed the title Gitea version 1.20.3 gitea is not coming up after redis cluster pod ip is changed. gitea is not coming up after redis cluster pod ip is changed. Sep 4, 2023
@Baitanik Baitanik changed the title gitea is not coming up after redis cluster pod ip is changed. gitea(v 1.20.3) is not coming up after redis cluster pod ip is changed. Sep 4, 2023
wxiaoguang (Contributor) commented Sep 4, 2023

IIRC, Gitea doesn't "internally maintain old Redis IP".

What's your "app.ini"? Could it be related to the helm chart, rather than Gitea?

@wxiaoguang wxiaoguang added issue/needs-feedback For bugs, we need more details. For features, the feature must be described in more detail and removed type/bug labels Sep 4, 2023
Baitanik (Author) commented Sep 20, 2023

Sorry for the late reply. For now we have disabled Redis in order to move forward.
But the Redis issue still exists if we enable it again. The app.ini is below:


[ui]
DEFAULT_THEME = company
THEMES = company

[indexer]
ISSUE_INDEXER_TYPE = db

[server]
ROOT_URL = https://ngxpccm.ccm18a.eng.mobilephone-dev.net/ccmgitea
ENABLE_PPROF = false
SSH_PORT = 22
APP_DATA_PATH = /data
SSH_LISTEN_PORT = 2222
PROTOCOL = http
HTTP_PORT = 53000
START_SSH_SERVER = true
SSH_DOMAIN = company.ccm.monitoring-dev.ngxp.com
DOMAIN = company.ccm.monitoring-dev.ngxp.com

[database]
NAME = giteadb
DB_TYPE = postgres
HOST = company-data-document-database-pg:5432
SCHEMA = gitea

[service]
ENABLE_REVERSE_PROXY_AUTHENTICATION = true
DISABLE_REGISTRATION = false
ENABLE_REVERSE_PROXY_AUTO_REGISTRATION = true

[session]
PROVIDER_CONFIG = redis+cluster://:@company-ngxp-ccm-gitea-redis-cluster-headless.company-ngxp-ccm-gitea-v120.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
PROVIDER = redis

[repository]
ROOT = /data/git/gitea-repositories

[cache]
HOST = redis+cluster://:@company-ngxp-ccm-gitea-redis-cluster-headless.company-ngxp-ccm-gitea-v120.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&
ENABLED = true
ADAPTER = redis

[security]
INSTALL_LOCK = true
REVERSE_PROXY_AUTHENTICATION_USER = HTTP_IV_USER

[queue]
TYPE = redis
CONN_STR = redis+cluster://:@company-ngxp-ccm-gitea-redis-cluster-headless.company-ngxp-ccm-gitea-v120.svc.cluster.local:6379/0?pool_size=100&idle_timeout=180s&

[metrics]
ENABLED = false

[oauth2]
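
Note that the [session], [cache] and [queue] sections all point at the Redis cluster headless service, so the initial connection should go through DNS. A quick sanity check, to see whether that service name still resolves to the new pod IPs after a reboot, could be run from inside the Gitea pod (the deployment name below is only a guess based on the release name, and this assumes getent is available in the rootless image):

# resolve the headless service from inside the Gitea pod
kubectl -n company-ngxp-ccm-gitea-v120 exec -it deploy/company-ngxp-ccm-gitea -- \
  getent hosts company-ngxp-ccm-gitea-redis-cluster-headless.company-ngxp-ccm-gitea-v120.svc.cluster.local

If DNS already returns the new addresses, the stale fd12:3456:789a:0:1::701 most likely comes from the Redis cluster topology rather than from anything Gitea stores itself.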

wxiaoguang (Contributor) commented Sep 20, 2023

(I removed some sensitive values from your posted config)

TBH I have no idea about the problem at the moment.

Actually, Gitea doesn't "maintain old IP"; the pods are managed by the k8s cluster. Maybe the helm chart maintainers could help? https://gitea.com/gitea/helm-chart/


Chubukov-Aleksey commented Oct 3, 2023

I have experienced this issue. It was caused by the Redis nodes reporting wrong addresses: the whole Redis cluster failed and the nodes were unable to connect to each other. The cluster can be restored by "meeting" one node with the other nodes (CLUSTER MEET); after that, the remaining nodes will discover each other automatically. Finally, restart the Gitea pod.
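
A minimal sketch of that recovery, assuming no Redis auth (the connection strings above have an empty password) and using the pod/namespace names from the earlier snippets plus the new pod IPs from "kubectl get pods -o wide" as placeholders:

# re-introduce the other nodes to one node under their new IPs (repeat per node)
kubectl -n xxx-gitea-v120 exec -it xxx-gitea-redis-cluster-0 -- redis-cli cluster meet <new-pod-ip> 6379

# verify that the node table now shows the new addresses
kubectl -n xxx-gitea-v120 exec -it xxx-gitea-redis-cluster-0 -- redis-cli cluster nodes

# finally restart Gitea so it picks up the refreshed topology (deployment name is a guess)
kubectl -n xxx-gitea-v120 rollout restart deployment xxx-gitea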

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 19, 2023