This repository has been archived by the owner on Apr 17, 2019. It is now read-only.

Nginx Ingress controller default backend #1590

Closed
wernight opened this issue Aug 22, 2016 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@wernight
Contributor

I noticed on https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx that you advise to explicitly specify the default backend. Why do that instead of using the Ingress-defined default backend?

kind: Ingress
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80

Also, why is returning an HTTP 404 a requirement? Shouldn't that be up to the website to decide how to respond to unknown subdomains?

@aledbf
Contributor

aledbf commented Aug 23, 2016

Why do that instead of using the Ingress-defined default backend?

In the nginx ingress controller it is only possible to have one "catch all" server. Using a default backend also makes it possible to stay compatible with GCE (for the default backend).
The issue appears if you define more than one Ingress, each with a spec.backend. Which backend should be used?
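For example (a hypothetical pair of manifests; the names are placeholders), two Ingresses that each set spec.backend would both claim the single catch-all server:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site-a        # placeholder
spec:
  backend:
    serviceName: service-a
    servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site-b        # placeholder
spec:
  backend:
    serviceName: service-b
    servicePort: 80

With both applied, a single nginx controller has only one catch-all server to hand out, so it cannot honour service-a and service-b at the same time.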

Shouldn't that be up to the website to decide how to respond to unknown subdomains?

The default backend is used only if the service in the ingress rules does not have active endpoints.

@wernight
Contributor Author

Agree on the backend problem with multiple Ingresses. It's actually an issue with the YAML and should be addressed. I don't know what the GCE Ingress controller does, but I don't see a way for it to know either. Still, I'd just pick one, log errors, and support spec.backend in addition to the command-line argument. It seems at the moment that --default-backend-service is required, when it should be optional.

Yes, but that shouldn't make returning 404 a requirement. For example, one may well decide to show the home page for any unknown subdomain. It's good practice, but not a "must". However, HTTP 200 on /healthz is a must unless you change the Ingress liveness check. So I'd just update the documentation.
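For reference, a minimal default backend in the spirit of the contrib examples (a sketch; the image, port and names are the commonly used defaults, not mandated here) serves 404 on / and 200 on /healthz:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        # stock image: 404 on every path except /healthz, which returns 200
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080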

@wstrange

wstrange commented Sep 6, 2016

For development having a default backend would be nice and would simplify deployment.

@bprashanth

Is there a request here or is it just a doc clarity issue?

Returning 404 is not a requirement. Even specifying a default backend is not strictly a requirement, but if the Ingress doesn't specify one through the spec.backend section, the ingress controller should redirect unknown URLs somewhere (not hang or return a random 5xx error code). To capture the 80% case it should redirect to a 404 page, but this should be configurable.
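As a sketch of the configurable part (flag name per the README in this repo; the binary path, image tag and service name are illustrative placeholders), the controller is simply pointed at whatever catch-all service you prefer:

spec:
  containers:
  - name: nginx-ingress-controller
    image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
    args:
    - /nginx-ingress-controller
    # any Service that answers unmatched requests; a 404 page is just the common choice
    - --default-backend-service=$(POD_NAMESPACE)/my-custom-404
    env:
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace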

The GCE controller creates one L7 per Ingress, so the default of that Ingress is the default of the L7. This is the equivalent of creating one nginx controller pod per Ingress instead of one uber ingress controller serving them all. I think both modes are equally valid, and in the case where all Ingresses are served from a single pod, we assume that the admin doesn't care too much about multi-tenant isolation (or they would run the one-Ingress-per-pod mode).

@kamalmarhubi

kamalmarhubi commented Nov 21, 2016

Is there a request here or is it just a doc clarity issue?

I think there is a doc clarity issue here. It's not clear from the README at all what the default backend is for: it's just listed as a controller.

@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
