
Chart: Updating the chart without changing nginx image version breaks the ingress #5559

Closed
pierluigilenoci opened this issue May 15, 2020 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pierluigilenoci

NGINX Ingress controller version:

nginx-ingress-1.17.1, v0.24.1
Old stable chart, I know 😞, but this is a copy of a ticket opened on the stable chart repo (helm/charts#19976) that has gone unattended for the last 5 months. I hope that in the meantime the problem has been solved.

Kubernetes version (use kubectl version):

K8s v1.12.7
Helm v2.14.3

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

What happened:

I had nginx-ingress installed (nginx-ingress-1.15.0, v0.24.1).
I updated the chart (nginx-ingress-1.17.1, v0.24.1).
The NGINX controller image version was deliberately left unchanged (because of this problem: #4305).
The nginx pods then began to generate these errors:

E0902 08:07:22.820980      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: Failed to list *v1.Pod: Unauthorized
E0902 08:07:22.821894      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:154: Failed to list *v1.Secret: Unauthorized
E0902 08:07:22.822561      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:153: Failed to list *v1.Service: Unauthorized
E0902 08:07:23.118202      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: Failed to list *v1beta1.Ingress: Unauthorized
E0902 08:07:23.278460      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:152: Failed to list *v1.Endpoints: Unauthorized
E0902 08:07:23.563842      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:155: Failed to list *v1.ConfigMap: Unauthorized
E0902 08:07:23.824365      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:156: Failed to list *v1.Pod: Unauthorized
E0902 08:07:23.824869      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:154: Failed to list *v1.Secret: Unauthorized
E0902 08:07:23.826023      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:153: Failed to list *v1.Service: Unauthorized
E0902 08:07:24.121045      12 reflector.go:126] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:176: Failed to list *v1beta1.Ingress: Unauthorized

And then I found this error in the pods:

MountVolume.SetUp failed for volume "nginx-ingress-token-2nf6f" : secrets "nginx-ingress-token-2nf6f" not found

Basically, the chart update changed the name of the service account token secret. The pods still referenced the old one and stopped working. After restarting them, they picked up the new secret and started working properly again.
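"Restarting" here just means deleting the controller pods so the Deployment recreates them with the new secret mounted. A minimal sketch of what that looks like, assuming the release runs in the ingress-nginx namespace with the chart's default app=nginx-ingress labels (adjust namespace, labels and names to your setup):

# Show which service account token secret the running pods still mount
kubectl -n ingress-nginx get pods -l app=nginx-ingress \
  -o jsonpath='{.items[*].spec.volumes[*].secret.secretName}'

# Delete the controller pods; the Deployment recreates them and they mount
# the token secret created by the updated chart
kubectl -n ingress-nginx delete pods -l app=nginx-ingress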

What you expected to happen:

If I update only the chart, I would expect either the old pods to keep working or Helm to recreate them.

I found this way to do it: https://medium.com/@chunjenchen/helm-upgrade-is-not-recreating-pod-f1813ce8e55a
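For example (a sketch only, not verified against this chart; the release name and chart repo below are placeholders), Helm 2 can be told to recreate the release's pods during the upgrade so they mount the renamed secret:

# Helm 2 only: --recreate-pods deletes and recreates the release's pods on upgrade.
# The flag was dropped in Helm 3, where a checksum annotation on the pod template
# is the usual way to force recreation instead.
helm upgrade nginx-ingress stable/nginx-ingress --version 1.17.1 --recreate-pods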

How to reproduce it:

Repeat the same steps I did; a rough sketch of the commands is below.
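A sketch of the reproduction, assuming Helm 2 syntax and placeholder release/namespace names; controller.image.tag is assumed to be the value the stable chart uses to pin the controller image, so adjust if your values differ:

# Install the old chart with the controller image pinned to 0.24.1
helm install stable/nginx-ingress --name nginx-ingress --namespace ingress-nginx \
  --version 1.15.0 --set controller.image.tag=0.24.1

# Upgrade only the chart, keeping the controller image pinned to the same tag
helm upgrade nginx-ingress stable/nginx-ingress --version 1.17.1 \
  --set controller.image.tag=0.24.1

After the upgrade, the old controller pods start logging the Unauthorized errors shown above.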

Anything else we need to know:

I appreciate your work!

/kind bug

@tuananhnguyen-ct
Contributor

Can you clarify which version you upgraded from and to, and which chart repo you were using (https://github.com/helm/charts/stable/ingress-nginx or https://github.com/kubernetes/ingress-nginx)?

@pierluigilenoci
Author

pierluigilenoci commented May 19, 2020

@tuananhnguyen-ct as written in the issue: old stable chart, update from nginx-ingress-1.15.0 (v0.24.1) to nginx-ingress-1.17.1 (v0.24.1).

Thank you.

@tuananhnguyen-ct
Contributor

Can you share the helm upgrade commands you were using and their result? I checked, and nothing between 1.15 and 1.17 should trigger that error, so it's likely an issue with the deployment flow or your Kubernetes cluster.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 18, 2020
@pierluigilenoci
Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 11, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 10, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 9, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
