diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index cfa44abda4..f1e8754290 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -36,7 +36,7 @@ Ingress collaborators may add "LGTM" (Looks Good To Me) or an equivalent comment Whether you are a user or contributor, official support channels include: -- GitHub issues: https://github.com/kubernetes/ingress/issues/new +- GitHub issues: https://github.com/kubernetes/ingress-nginx/issues/new - Slack: kubernetes-users room in the [Kubernetes Slack](http://slack.kubernetes.io/) - Email: [kubernetes-users](https://groups.google.com/forum/#!forum/kubernetes-users) mailing list diff --git a/README.md b/README.md index afc8ca919c..e52a401d48 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,6 @@ The GCE ingress controller was moved to [github.com/kubernetes/ingress-gce](http [![Build Status](https://travis-ci.org/kubernetes/ingress-nginx.svg?branch=master)](https://travis-ci.org/kubernetes/ingress-nginx) [![Coverage Status](https://coveralls.io/repos/github/kubernetes/ingress-nginx/badge.svg?branch=master)](https://coveralls.io/github/kubernetes/ingress-nginx?branch=master) [![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes/ingress-nginx)](https://goreportcard.com/report/github.com/kubernetes/ingress-nginx) -[![GoDoc](https://godoc.org/github.com/kubernetes/ingress-nginx?status.svg)](https://godoc.org/github.com/kubernetes/ingress-nginx) ## Description @@ -25,35 +24,33 @@ An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches th ## Contents -* [Conventions](#conventions) -* [Requirements](#requirements) -* [Contribute](#contribute) -* [Command line arguments](#command-line-arguments) -* [Deployment](#deployment) -* [HTTP](#http) -* [HTTPS](#https) - * [Default SSL Certificate](#default-ssl-certificate) - * [HTTPS enforcement](#server-side-https-enforcement) - * [HSTS](#http-strict-transport-security) - * [Kube-Lego](#automated-certificate-management-with-kube-lego) -* [Source IP address](#source-ip-address) -* [TCP Services](#exposing-tcp-services) -* [UDP Services](#exposing-udp-services) -* [Proxy Protocol](#proxy-protocol) -* [ModSecurity Web Application Firewall](#modsecurity-web-application-firewall) -* [Opentracing](#opentracing) -* [NGINX customization](configuration.md) -* [Custom errors](#custom-errors) -* [NGINX status page](#nginx-status-page) -* [Running multiple ingress controllers](#running-multiple-ingress-controllers) -* [Running on Cloudproviders](#running-on-cloudproviders) -* [Disabling NGINX ingress controller](#disabling-nginx-ingress-controller) -* [Log format](#log-format) -* [Local cluster](#local-cluster) -* [Debug & Troubleshooting](#debug--troubleshooting) -* [Limitations](#limitations) -* [Why endpoints and not services?](#why-endpoints-and-not-services) -* [NGINX Notes](#nginx-notes) +- [Conventions](#conventions) +- [Requirements](#requirements) +- [Deployment](deploy/README.md) +- [Command line arguments](docs/user-guide/cli-arguments.md) +- [Contribute](CONTRIBUTING.md) +- [TLS](docs/user-guide/tls.md) +- [Annotation ingress.class](#annotation-ingressclass) +- [Customizing NGINX](#customizing-nginx) + - [Custom NGINX configuration](docs/user-guide/configmap.md) + - [Annotations](docs/user-guide/annotations.md) +- [Source IP address](#source-ip-address) +- [Exposing TCP and UDP Services](docs/user-guide/exposing-tcp-udp-services.md) +- [Proxy Protocol](#proxy-protocol) +- [ModSecurity Web Application Firewall](docs/user-guide/modsecurity.md) +- 
[Opentracing](docs/user-guide/opentracing.md) +- [Custom errors](docs/user-guide/custom-errors.md) +- [NGINX status page](docs/user-guide/nginx-status-page.md) +- [Running multiple ingress controllers](#running-multiple-ingress-controllers) +- [Disabling NGINX ingress controller](#disabling-nginx-ingress-controller) +- [Retries in non-idempotent methods](#retries-in-non-idempotent-methods) +- [Log format](docs/user-guide/log-format.md) +- [Websockets](#websockets) +- [Optimizing TLS Time To First Byte (TTTFB)](#optimizing-tls-time-to-first-byte-tttfb) +- [Debug & Troubleshooting](docs/troubleshooting.md) +- [Limitations](#limitations) +- [Why endpoints and not services?](#why-endpoints-and-not-services) +- [External Articles](docs/user-guide/external-articles.md) ## Conventions @@ -63,338 +60,56 @@ and create the secret via `kubectl create secret tls ${CERT_NAME} --key ${KEY_FI ## Requirements -Default backend [404-server](https://github.com/kubernetes/ingress/tree/master/images/404-server) - -## Contribute - -See the [contributor guidelines](CONTRIBUTING.md) - -## Command line arguments - -```console -Usage of : - --alsologtostderr log to standard error as well as files - --apiserver-host string The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted. - --configmap string Name of the ConfigMap that contains the custom configuration to use - --default-backend-service string Service used to serve a 404 page for the default backend. Takes the form - namespace/name. The controller uses the first node port of this Service for - the default backend. - --default-server-port int Default port to use for exposing the default server (catch all) (default 8181) - --default-ssl-certificate string Name of the secret - that contains a SSL certificate to be used as default for a HTTPS catch-all server - --disable-node-list Disable querying nodes. If --force-namespace-isolation is true, this should also be set. - --election-id string Election id to use for status update. (default "ingress-controller-leader") - --enable-ssl-passthrough Enable SSL passthrough feature. Default is disabled - --force-namespace-isolation Force namespace isolation. This flag is required to avoid the reference of secrets or - configmaps located in a different namespace than the specified in the flag --watch-namespace. - --health-check-path string Defines - the URL to be used as health check inside in the default server in NGINX. (default "/healthz") - --healthz-port int port for healthz endpoint. (default 10254) - --http-port int Indicates the port to use for HTTP traffic (default 80) - --https-port int Indicates the port to use for HTTPS traffic (default 443) - --ingress-class string Name of the ingress class to route through this controller. - --kubeconfig string Path to kubeconfig file with authorization and master location information. - --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) - --log_dir string If non-empty, write log files in this directory - --logtostderr log to standard error instead of files - --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) - --publish-service string Service fronting the ingress controllers. Takes the form - namespace/name. The controller will set the endpoint records on the - ingress objects to reflect those on the service. 
- --sort-backends Defines if backends and it's endpoints should be sorted - --ssl-passtrough-proxy-port int Default port to use internally for SSL when SSL Passthgough is enabled (default 442) - --status-port int Indicates the TCP port to use for exposing the nginx status page (default 18080) - --stderrthreshold severity logs at or above this threshold go to stderr (default 2) - --sync-period duration Relist and confirm cloud resources this often. Default is 10 minutes (default 10m0s) - --tcp-services-configmap string Name of the ConfigMap that contains the definition of the TCP services to expose. - The key in the map indicates the external port to be used. The value is the name of the - service with the format namespace/serviceName and the port of the service could be a - number of the name of the port. - The ports 80 and 443 are not allowed as external ports. This ports are reserved for the backend - --udp-services-configmap string Name of the ConfigMap that contains the definition of the UDP services to expose. - The key in the map indicates the external port to be used. The value is the name of the - service with the format namespace/serviceName and the port of the service could be a - number of the name of the port. - --update-status Indicates if the - ingress controller should update the Ingress status IP/hostname. Default is true (default true) - --update-status-on-shutdown Indicates if the - ingress controller should update the Ingress status IP/hostname when the controller - is being stopped. Default is true (default true) - -v, --v Level log level for V logs - --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging - --watch-namespace string Namespace to watch for Ingress. Default is to watch all namespaces -``` - -## Deployment - -First create a default backend and it's corresponding service: - -```console -kubectl create -f examples/default-backend.yaml -``` - -Follow the [example-deployment](examples/deployment/README.md) steps to deploy nginx-ingress-controller in Kubernetes cluster (you may prefer other type of workloads, like Daemonset, in production environment). -Loadbalancers are created via a ReplicationController or Daemonset: - -## HTTP - -First we need to deploy some application to publish. 
To keep this simple we will use the [echoheaders app](https://github.com/kubernetes/contrib/blob/master/ingress/echoheaders/echo-app.yaml) that just returns information about the http request as output
-
-```console
-kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.8 --replicas=1 --port=8080
-```
-
-Now we expose the same application in two different services (so we can create different Ingress rules)
-
-```console
-kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
-kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
-```
-
-Next we create a couple of Ingress rules
-
-```console
-kubectl create -f examples/ingress.yaml
-```
-
-we check that ingress rules are defined:
-
-```console
-$ kubectl get ing
-NAME      RULE          BACKEND   ADDRESS
-echomap   -
-          foo.bar.com
-          /foo          echoheaders-x:80
-          bar.baz.com
-          /bar          echoheaders-y:80
-          /foo          echoheaders-x:80
-```
+The default backend is a service that handles all URL paths and hosts the nginx controller doesn't understand, i.e., all the requests that are not mapped by an Ingress.
+Basically, a default backend exposes two URLs:

-Before the deploy of the Ingress controller we need a default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)

+- `/healthz` that returns 200
+- `/` that returns 404

-```console
-kubectl create -f examples/default-backend.yaml
-kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
-```
-
-Check NGINX it is running with the defined Ingress rules:
-
-```console
-$ LBIP=$(kubectl get node `kubectl get po -l name=nginx-ingress-lb --template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template '{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
-$ curl $LBIP/foo -H 'Host: foo.bar.com'
-```
+The directory [404-server](https://github.com/kubernetes/ingress-nginx/tree/master/images/404-server) contains the image of the default backend, and [custom-error-pages](https://github.com/kubernetes/ingress-nginx/tree/master/images/custom-error-pages) is an example that shows how the error responses can be customized.

-## HTTPS
+## Annotation ingress.class

-You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS, eg:
+If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the `ingress.class`
+annotation, e.g. creating an Ingress with an annotation like

```yaml
-apiVersion: v1
-data:
-  tls.crt: base64 encoded cert
-  tls.key: base64 encoded key
-kind: Secret
 metadata:
-  name: foo-secret
-  namespace: default
-type: kubernetes.io/tls
+  name: foo
+  annotations:
+    kubernetes.io/ingress.class: "gce"
```

-Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
+will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like

```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
 metadata:
-  name: no-rules-map
-spec:
-  tls:
-    secretName: foo-secret
-  backend:
-    serviceName: s1
-    servicePort: 80
-```
-
-Please follow [PREREQUISITES](examples/PREREQUISITES.md) as a guide on how to generate secrets containing SSL certificates.
The name of the secret can be different than the name of the certificate. - -Check the [example](examples/tls-termination/nginx) - -### Default SSL Certificate - -NGINX provides the option [server name _](http://nginx.org/en/docs/http/server_names.html) as a catch-all in case of requests that do not match one of the configured server names. This configuration works without issues for HTTP traffic. In case of HTTPS, NGINX requires a certificate. For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret behind this flag contains the default certificate to be used in the mentioned case. If this flag is not provided NGINX will use a self signed certificate. - -Running without the flag `--default-ssl-certificate`: - -```console -$ curl -v https://10.2.78.7:443 -k -* Rebuilt URL to: https://10.2.78.7:443/ -* Trying 10.2.78.4... -* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) -* ALPN, offering http/1.1 -* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH -* successfully set certificate verify locations: -* CAfile: /etc/ssl/certs/ca-certificates.crt - CApath: /etc/ssl/certs -* TLSv1.2 (OUT), TLS header, Certificate Status (22): -* TLSv1.2 (OUT), TLS handshake, Client hello (1): -* TLSv1.2 (IN), TLS handshake, Server hello (2): -* TLSv1.2 (IN), TLS handshake, Certificate (11): -* TLSv1.2 (IN), TLS handshake, Server key exchange (12): -* TLSv1.2 (IN), TLS handshake, Server finished (14): -* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): -* TLSv1.2 (OUT), TLS change cipher, Client hello (1): -* TLSv1.2 (OUT), TLS handshake, Finished (20): -* TLSv1.2 (IN), TLS change cipher, Client hello (1): -* TLSv1.2 (IN), TLS handshake, Finished (20): -* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 -* ALPN, server accepted to use http/1.1 -* Server certificate: -* subject: CN=foo.bar.com -* start date: Apr 13 00:50:56 2016 GMT -* expire date: Apr 13 00:50:56 2017 GMT -* issuer: CN=foo.bar.com -* SSL certificate verify result: self signed certificate (18), continuing anyway. -> GET / HTTP/1.1 -> Host: 10.2.78.7 -> User-Agent: curl/7.47.1 -> Accept: */* -> -< HTTP/1.1 404 Not Found -< Server: nginx/1.11.1 -< Date: Thu, 21 Jul 2016 15:38:46 GMT -< Content-Type: text/html -< Transfer-Encoding: chunked -< Connection: keep-alive -< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload -< -The page you're looking for could not be found. - -* Connection #0 to host 10.2.78.7 left intact + name: foo + annotations: + kubernetes.io/ingress.class: "nginx" ``` -Specifying `--default-ssl-certificate=default/foo-tls`: - -```console -core@localhost ~ $ curl -v https://10.2.78.7:443 -k -* Rebuilt URL to: https://10.2.78.7:443/ -* Trying 10.2.78.7... 
-* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) -* ALPN, offering http/1.1 -* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH -* successfully set certificate verify locations: -* CAfile: /etc/ssl/certs/ca-certificates.crt - CApath: /etc/ssl/certs -* TLSv1.2 (OUT), TLS header, Certificate Status (22): -* TLSv1.2 (OUT), TLS handshake, Client hello (1): -* TLSv1.2 (IN), TLS handshake, Server hello (2): -* TLSv1.2 (IN), TLS handshake, Certificate (11): -* TLSv1.2 (IN), TLS handshake, Server key exchange (12): -* TLSv1.2 (IN), TLS handshake, Server finished (14): -* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): -* TLSv1.2 (OUT), TLS change cipher, Client hello (1): -* TLSv1.2 (OUT), TLS handshake, Finished (20): -* TLSv1.2 (IN), TLS change cipher, Client hello (1): -* TLSv1.2 (IN), TLS handshake, Finished (20): -* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 -* ALPN, server accepted to use http/1.1 -* Server certificate: -* subject: CN=foo.bar.com -* start date: Apr 13 00:50:56 2016 GMT -* expire date: Apr 13 00:50:56 2017 GMT -* issuer: CN=foo.bar.com -* SSL certificate verify result: self signed certificate (18), continuing anyway. -> GET / HTTP/1.1 -> Host: 10.2.78.7 -> User-Agent: curl/7.47.1 -> Accept: */* -> -< HTTP/1.1 404 Not Found -< Server: nginx/1.11.1 -< Date: Mon, 18 Jul 2016 21:02:59 GMT -< Content-Type: text/html -< Transfer-Encoding: chunked -< Connection: keep-alive -< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload -< -The page you're looking for could not be found. - -* Connection #0 to host 10.2.78.7 left intact -``` - -### Server-side HTTPS enforcement - -By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress . If you want to disable that behaviour globally, you can use `ssl-redirect: "false"` in the NGINX config map. - -To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource. - -### HTTP Strict Transport Security - -HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. +will target the nginx controller, forcing the GCE controller to ignore it. -By default the controller redirects (301) to HTTPS if there is a TLS Ingress rule. +__Note__: Deploying multiple ingress controller and not specifying the annotation will result in both controllers fighting to satisfy the Ingress. -To disable this behavior use `hsts=false` in the NGINX config map. +### Customizing NGINX -### Automated Certificate Management with Kube-Lego +There are three ways to customize NGINX: -[Kube-Lego] automatically requests missing or expired certificates from [Let's Encrypt] by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation: - -```console -kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true" -``` - -To setup Kube-Lego you can take a look at this [full example]. The first -version to fully support Kube-Lego is nginx Ingress controller 0.8. - -[full example]:https://github.com/jetstack/kube-lego/tree/master/examples -[Kube-Lego]:https://github.com/jetstack/kube-lego -[Let's Encrypt]:https://letsencrypt.org +1. 
[ConfigMap](docs/user-guide/configmap.md): using a ConfigMap to set global configurations in NGINX.
+2. [Annotations](docs/user-guide/annotations.md): use this if you want a specific configuration for a particular Ingress rule.
+3. [Custom template](docs/user-guide/custom-template.md): when more specific settings are required, like [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), adjusting [listen](http://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options such as `rcvbuf`, or when it is not possible to change the configuration through the ConfigMap.

 ## Source IP address

-By default NGINX uses the content of the header `X-Forwarded-For` as the source of truth to get information about the client IP address. This works without issues in L7 **if we configure the setting `proxy-real-ip-cidr`** with the correct information of the IP/network address of the external load balancer.
-If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR. This allows NGINX to avoid the spoofing of the header.
-Another option is to enable proxy protocol using `use-proxy-protocol: "true"`.
-In this mode NGINX do not uses the content of the header to get the source IP address of the connection.
-
-## Exposing TCP services
-
-Ingress does not support TCP services (yet). For this reason this Ingress controller uses the flag `--tcp-services-configmap` to point to an existing config map where the key is the external port to use and the value is `::[PROXY]:[PROXY]`
-It is possible to use a number or the name of the port. The two last fields are optional. Adding `PROXY` in either or both of the two last fields we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service (https://www.nginx.com/resources/admin-guide/proxy-protocol/).
-
-The next example shows how to expose the service `example-go` running in the namespace `default` in the port `8080` using the port `9000`
-
-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: tcp-configmap-example
-data:
-  9000: "default/example-go:8080"
-```
-
-Please check the [tcp services](examples/tcp/README.md) example
-
-## Exposing UDP services
-
-Since 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/).
-
-Ingress does not support UDP services (yet). For this reason this Ingress controller uses the flag `--udp-services-configmap` to point to an existing config map where the key is the external port to use and the value is `:`
-It is possible to use a number or the name of the port.
-
-The next example shows how to expose the service `kube-dns` running in the namespace `kube-system` in the port `53` using the port `53`
+By default NGINX uses the content of the header `X-Forwarded-For` as the source of truth to get information about the client IP address. This works without issues in L7 **if we configure the setting `proxy-real-ip-cidr`** with the correct information of the IP/network address of the trusted external load balancer.

-```yaml
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: udp-configmap-example
-data:
-  53: "kube-system/kube-dns:53"
-```
+If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.
+Another option is to enable proxy protocol using `use-proxy-protocol: "true"`.

-Please check the [udp services](examples/udp/README.md) example
+In this mode NGINX does not use the content of the header to get the source IP address of the connection.
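To make the ConfigMap-driven customization above concrete, a minimal sketch of a global configuration setting the source-IP related keys could look like the following. The ConfigMap name, namespace and CIDR are placeholders; the controller reads whichever ConfigMap its `--configmap` flag points to.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # placeholder name/namespace; use whatever the controller's --configmap flag references
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # trust X-Forwarded-For only when the request comes from the external load balancer
  proxy-real-ip-cidr: "10.0.0.0/16"   # placeholder: the VPC or load balancer CIDR
  # alternatively, take the client address from the proxy protocol instead of the header
  use-proxy-protocol: "false"
```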
## Proxy Protocol @@ -402,171 +117,38 @@ If you are using a L4 proxy to forward the traffic to the NGINX pods and termina Amongst others [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) and [HAProxy](http://www.haproxy.org/) support Proxy Protocol. -Please check the [proxy-protocol](examples/proxy-protocol/) example - -## ModSecurity Web Application Firewall - -ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analys… https://www.modsecurity.org - -The [ModSecurity-nginx](https://github.com/SpiderLabs/ModSecurity-nginx) connector is the connection point between NGINX and libmodsecurity (ModSecurity v3). - -The default modsecurity configuration file is located in `/etc/nginx/modsecurity/modsecurity.conf`. This is the only file located in this directory and it contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. -To enable the modsecurity feature we need to specify `enable-modsecurity: "true"` in the configuration configmap. - -The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. -The directory `/etc/nginx/owasp-modsecurity-crs` contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository. -Using `enable-owasp-modsecurity-crs: "true"` we enable the use of the this rules. - -## Opentracing - -Using the third party module [rnburn/nginx-opentracing](https://github.com/rnburn/nginx-opentracing) the NGINX ingress controller can configure NGINX to enable [OpenTracing](http://opentracing.io) instrumentation. -By default this feature is disabled. - -To enable the instrumentation we just need to enable the instrumentation in the configuration configmap and set the host where we should send the traces. - -In the [aledbf/zipkin-js-example](https://github.com/aledbf/zipkin-js-example) github repository is possible to see a dockerized version of zipkin-js-example with the required Kubernetes descriptors. 
-To install the example and the zipkin collector we just need to run: - -``` -kubectl create -f https://raw.githubusercontent.com/aledbf/zipkin-js-example/kubernetes/kubernetes/zipkin.yaml -kubectl create -f https://raw.githubusercontent.com/aledbf/zipkin-js-example/kubernetes/kubernetes/deployment.yaml -``` - -Also we need to configure the NGINX controller configmap with the required values: - -```yaml -apiVersion: v1 -data: - enable-opentracing: "true" - zipkin-collector-host: zipkin.default.svc.cluster.local -kind: ConfigMap -metadata: - labels: - k8s-app: nginx-ingress-controller - name: nginx-custom-configuration -``` - -Using curl we can generate some traces: - -```console -$ curl -v http://$(minikube ip)/api -H 'Host: zipkin-js-example' -$ curl -v http://$(minikube ip)/api -H 'Host: zipkin-js-example' -``` - -In the zipkin inteface we can see the details: - -![zipkin screenshot](docs/images/zipkin-demo.png "zipkin collector screenshot") - -### Custom errors - -In case of an error in a request the body of the response is obtained from the `default backend`. -Each request to the default backend includes two headers: - -- `X-Code` indicates the HTTP code to be returned to the client. -- `X-Format` the value of the `Accept` header. - -**Important:** the custom backend must return the correct HTTP status code to be returned. NGINX do not changes the reponse from the custom default backend. - -Using this two headers is possible to use a custom backend service like [this one](https://github.com/kubernetes/ingress/tree/master/examples/customization/custom-errors/nginx) that inspect each request and returns a custom error page with the format expected by the client. Please check the example [custom-errors](examples/customization/custom-errors/README.md) - -NGINX sends aditional headers that can be used to build custom response: - -- X-Original-URI -- X-Namespace -- X-Ingress-Name -- X-Service-Name - -### NGINX status page - -The ngx_http_stub_status_module module provides access to basic status information. This is the default module active in the url `/nginx_status`. -This controller provides an alternative to this module using [nginx-module-vts](https://github.com/vozlt/nginx-module-vts) third party module. -To use this module just provide a config map with the key `enable-vts-status=true`. The URL is exposed in the port 18080. -Please check the example `example/rc-default.yaml` - -![nginx-module-vts screenshot](https://cloud.githubusercontent.com/assets/3648408/10876811/77a67b70-8183-11e5-9924-6a6d0c5dc73a.png "screenshot with filter") - -To extract the information in JSON format the module provides a custom URL: `/nginx_status/format/json` - ### Running multiple ingress controllers If you're running multiple ingress controllers, or running on a cloudprovider that natively handles ingress, you need to specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like this controller to claim. Not specifying the annotation will lead to multiple ingress controllers claiming the same ingress. Specifying the wrong value will result in all ingress controllers ignoring the ingress. Multiple ingress controllers running in the same cluster was not supported in Kubernetes versions < 1.3. -### Running on Cloudproviders +### Websockets -If you're running this ingress controller on a cloudprovider, you should assume the provider also has a native Ingress controller and specify the ingress.class annotation as indicated in this section. 
-In addition to this, you will need to add a firewall rule for each port this controller is listening on, i.e :80 and :443. +Support for websockets is provided by NGINX out of the box. No special configuration required. -### Disabling NGINX ingress controller +The only requirement to avoid the close of connections is the increase of the values of `proxy-read-timeout` and `proxy-send-timeout`. -Setting the annotation `kubernetes.io/ingress.class` to any value other than "nginx" or the empty string, will force the NGINX Ingress controller to ignore your Ingress. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller. +The default value of this settings is `60 seconds`. +A more adequate value to support websockets is a value higher than one hour (`3600`). -### Log format +### Optimizing TLS Time To First Byte (TTTFB) -The default configuration uses a custom logging format to add additional information about upstreams +NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. -``` - log_format upstreaminfo '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - ' - '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" ' - '$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status'; -``` +This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). +The default value in the Ingress controller is `4k` (NGINX default is `16k`). -Sources: - - [upstream variables](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables) - - [embedded variables](http://nginx.org/en/docs/http/ngx_http_core_module.html#variables) - -Description: -- `$proxy_protocol_addr`: if PROXY protocol is enabled -- `$remote_addr`: if PROXY protocol is disabled (default) -- `$proxy_add_x_forwarded_for`: the `X-Forwarded-For` client request header field with the $remote_addr variable appended to it, separated by a comma -- `$remote_user`: user name supplied with the Basic authentication -- `$time_local`: local time in the Common Log Format -- `$request`: full original request line -- `$status`: response status -- `$body_bytes_sent`: number of bytes sent to a client, not counting the response header -- `$http_referer`: value of the Referer header -- `$http_user_agent`: value of User-Agent header -- `$request_length`: request length (including request line, header, and request body) -- `$request_time`: time elapsed since the first bytes were read from the client -- `$proxy_upstream_name`: name of the upstream. The format is `upstream---` -- `$upstream_addr`: keeps the IP address and port, or the path to the UNIX-domain socket of the upstream server. 
If several servers were contacted during request processing, their addresses are separated by commas -- `$upstream_response_length`: keeps the length of the response obtained from the upstream server -- `$upstream_response_time`: keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution -- `$upstream_status`: keeps status code of the response obtained from the upstream server - -### Local cluster - -Using [`hack/local-up-cluster.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh) is possible to start a local kubernetes cluster consisting of a master and a single node. Please read [running-locally.md](https://github.com/kubernetes/community/blob/master/contributors/devel/running-locally.md) for more details. - -Use of `hostNetwork: true` in the ingress controller is required to falls back at localhost:8080 for the apiserver if every other client creation check fails (eg: service account not present, kubeconfig doesn't exist, no master env vars...) - -### Debug & Troubleshooting - -Using the flag `--v=XX` it is possible to increase the level of logging. -In particular: -- `--v=2` shows details using `diff` about the changes in the configuration in nginx - -```console -I0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf -I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000 -+++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000 -@@ -235,7 +235,6 @@ - - upstream default-echoheadersx { - least_conn; -- server 10.2.112.124:5000; - server 10.2.208.50:5000; - - } -I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading... -``` +### Retries in non-idempotent methods + +Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. +The previous behavior can be restored using `retry-non-idempotent=true` in the configuration ConfigMap. + +### Disabling NGINX ingress controller -- `--v=3` shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format -- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html) +Setting the annotation `kubernetes.io/ingress.class` to any value other than "nginx" or the empty string, will force the NGINX Ingress controller to ignore your Ingress. -Peruse the [FAQ section](docs/faq/README.md) -Ask on one of the [user-support channels](CONTRIBUTING.md#support-channels) +Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller. ### Limitations @@ -575,10 +157,3 @@ Ask on one of the [user-support channels](CONTRIBUTING.md#support-channels) ### Why endpoints and not services The NGINX ingress controller does not uses [Services](http://kubernetes.io/docs/user-guide/services) to route traffic to the pods. Instead it uses the Endpoints API in order to bypass [kube-proxy](http://kubernetes.io/docs/admin/kube-proxy/) to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT. - -### NGINX notes - -Since `gcr.io/google_containers/nginx-slim:0.8` NGINX contains the next patches: -- Dynamic TLS record size [nginx__dynamic_tls_records.patch](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/) -NGINX provides the parameter `ssl_buffer_size` to adjust the size of the buffer. 
Default value in NGINX is 16KB. The ingress controller changes the default to 4KB. This improves the [TLS Time To First Byte (TTTFB)](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) but the size is fixed. This patches adapts the size of the buffer to the content is being served helping to improve the perceived latency. -- [HTTP/2 header compression](https://raw.githubusercontent.com/cloudflare/sslconfig/master/patches/nginx_http2_hpack.patch) diff --git a/configuration.md b/configuration.md deleted file mode 100644 index d463f06973..0000000000 --- a/configuration.md +++ /dev/null @@ -1,659 +0,0 @@ -## Contents - -* [Customizing NGINX](#customizing-nginx) -* [Custom NGINX configuration](#custom-nginx-configuration) -* [Custom NGINX template](#custom-nginx-template) -* [Annotations](#annotations) -* [Custom NGINX upstream checks](#custom-nginx-upstream-checks) -* [Custom NGINX upstream hashing](#custom-nginx-upstream-hashing) -* [Authentication](#authentication) -* [Rewrite](#rewrite) -* [Rate limiting](#rate-limiting) -* [SSL Passthrough](#ssl-passthrough) -* [Secure backends](#secure-backends) -* [Server-side HTTPS enforcement through redirect](#server-side-https-enforcement-through-redirect) -* [Whitelist source range](#whitelist-source-range) -* [Allowed parameters in configuration ConfigMap](#allowed-parameters-in-configuration-configmap) -* [Default configuration options](#default-configuration-options) -* [Websockets](#websockets) -* [Optimizing TLS Time To First Byte (TTTFB)](#optimizing-tls-time-to-first-byte-tttfb) -* [Retries in non-idempotent methods](#retries-in-non-idempotent-methods) -* [Custom max body size](#custom-max-body-size) - -### Customizing NGINX - -There are 3 ways to customize NGINX: - -1. [ConfigMap](#allowed-parameters-in-configuration-configmap): create a stand alone ConfigMap, use this if you want a different global configuration. -2. [annotations](#annotations): use this if you want a specific configuration for the site defined in the Ingress rule. -3. custom template: when more specific settings are required, like [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), custom [log_format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format), adjust [listen](http://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options as `rcvbuf` or when is not possible to change an through the ConfigMap. - -#### Custom NGINX configuration - -It is possible to customize the defaults in NGINX using a ConfigMap. -Please check the [custom configuration](../../examples/customization/custom-configuration/nginx/README.md) example. 
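As a rough sketch of the kind of ConfigMap this section refers to (the name and values are placeholders; the keys are taken from the list of allowed parameters further below):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # placeholder: the controller is pointed at this ConfigMap through its --configmap flag
  name: nginx-custom-configuration
data:
  # keys documented under "Allowed parameters in configuration ConfigMap"
  proxy-read-timeout: "3600"   # placeholder: raised, e.g., for long-lived connections
  proxy-send-timeout: "3600"
  hsts: "false"                # placeholder value; disables the HSTS header
```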
- -#### Annotations - -The following annotations are supported: - -|Name |type| -|---------------------------|------| -|[ingress.kubernetes.io/add-base-url](#rewrite)|true or false| -|[ingress.kubernetes.io/app-root](#rewrite)|string| -|[ingress.kubernetes.io/affinity](#session-affinity)|cookie| -|[ingress.kubernetes.io/auth-realm](#authentication)|string| -|[ingress.kubernetes.io/auth-secret](#authentication)|string| -|[ingress.kubernetes.io/auth-type](#authentication)|basic or digest| -|[ingress.kubernetes.io/auth-url](#external-authentication)|string| -|[ingress.kubernetes.io/auth-tls-secret](#certificate-authentication)|string| -|[ingress.kubernetes.io/auth-tls-verify-depth](#certificate-authentication)|number| -|[ingress.kubernetes.io/auth-tls-verify-client](#certificate-authentication)|string| -|[ingress.kubernetes.io/auth-tls-error-page](#certificate-authentication)|string| -|[ingress.kubernetes.io/base-url-scheme](#rewrite)|string| -|[ingress.kubernetes.io/client-body-buffer-size](#client-body-buffer-size)|string| -|[ingress.kubernetes.io/configuration-snippet](#configuration-snippet)|string| -|[ingress.kubernetes.io/default-backend](#default-backend)|string| -|[ingress.kubernetes.io/enable-cors](#enable-cors)|true or false| -|[ingress.kubernetes.io/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false| -|[ingress.kubernetes.io/from-to-www-redirect](#redirect-from-to-www)|true or false| -|[ingress.kubernetes.io/limit-connections](#rate-limiting)|number| -|[ingress.kubernetes.io/limit-rps](#rate-limiting)|number| -|[ingress.kubernetes.io/proxy-body-size](#custom-max-body-size)|string| -|[ingress.kubernetes.io/proxy-connect-timeout](#custom-timeouts)|number| -|[ingress.kubernetes.io/proxy-send-timeout](#custom-timeouts)|number| -|[ingress.kubernetes.io/proxy-read-timeout](#custom-timeouts)|number| -|[ingress.kubernetes.io/proxy-request-buffering](#custom-timeouts)|string| -|[ingress.kubernetes.io/rewrite-target](#rewrite)|URI| -|[ingress.kubernetes.io/secure-backends](#secure-backends)|true or false| -|[ingress.kubernetes.io/server-alias](#server-alias)|string| -|[ingress.kubernetes.io/server-snippet](#server-snippet)|string| -|[ingress.kubernetes.io/service-upstream](#service-upstream)|true or false| -|[ingress.kubernetes.io/session-cookie-name](#cookie-affinity)|string| -|[ingress.kubernetes.io/session-cookie-hash](#cookie-affinity)|string| -|[ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false| -|[ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|true or false| -|[ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number| -|[ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number| -|[ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string| -|[ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR| - -#### Custom NGINX template - -The NGINX template is located in the file `/etc/nginx/template/nginx.tmpl`. Mounting a volume is possible to use a custom version. -Use the [custom-template](../../examples/customization/custom-template/README.md) example as a guide. - -**Please note the template is tied to the Go code. Do not change names in the variable `$cfg`.** - -For more information about the template syntax please check the [Go template package](https://golang.org/pkg/text/template/). 
-In addition to the built-in functions provided by the Go package the following functions are also available: - - - empty: returns true if the specified parameter (string) is empty - - contains: [strings.Contains](https://golang.org/pkg/strings/#Contains) - - hasPrefix: [strings.HasPrefix](https://golang.org/pkg/strings/#HasPrefix) - - hasSuffix: [strings.HasSuffix](https://golang.org/pkg/strings/#HasSuffix) - - toUpper: [strings.ToUpper](https://golang.org/pkg/strings/#ToUpper) - - toLower: [strings.ToLower](https://golang.org/pkg/strings/#ToLower) - - buildLocation: helps to build the NGINX Location section in each server - - buildProxyPass: builds the reverse proxy configuration - - buildRateLimitZones: helps to build all the required rate limit zones - - buildRateLimit: helps to build a limit zone inside a location if contains a rate limit annotation - -### Custom NGINX upstream checks - -NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The Ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context using `upstream-max-fails` and `upstream-fail-timeout` in the NGINX ConfigMap or in a particular Ingress rule. `upstream-max-fails` defaults to 0. This means NGINX will respect the container's `readinessProbe` if it is defined. If there is no probe and no values for `upstream-max-fails` NGINX will continue to send traffic to the container. - -**With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.** - -To use custom values in an Ingress rule define these annotations: - -`ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the `upstream-fail-timeout` parameter to consider the server unavailable. - -`ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable. - -In NGINX, backend server pools are called "[upstreams](http://nginx.org/en/docs/http/ngx_http_upstream_module.html)". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined. - -**Important:** All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers. - -Please check the [custom upstream check](../../examples/customization/custom-upstream-check/README.md) example. - -### Custom NGINX upstream hashing - -NGINX supports load balancing by client-server mapping based on [consistent hashing](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](http://www.last.fm/user/RJ/journal/2007/04/10/392555/) consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes. 
- -To enable consistent hashing for a backend: - -`ingress.kubernetes.io/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example `ingress.kubernetes.io/upstream-hash-by: "$request_uri"` to consistently hash upstream requests by the current request URI. - -### Authentication - -Is possible to add authentication adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key `auth`. - -The annotations are: -``` -ingress.kubernetes.io/auth-type: [basic|digest] -``` - -Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617). - -``` -ingress.kubernetes.io/auth-secret: secretName -``` - -The name of the secret that contains the usernames and passwords with access to the `path`s defined in the Ingress Rule. -The secret must be created in the same namespace as the Ingress rule. - -``` -ingress.kubernetes.io/auth-realm: "realm string" -``` - -Please check the [auth](/examples/auth/basic/nginx/README.md) example. - -### Certificate Authentication - -It's possible to enable Certificate-Based Authentication (Mutual Authentication) using additional annotations in Ingress Rule. - -The annotations are: -``` -ingress.kubernetes.io/auth-tls-secret: secretName -``` - -The name of the secret that contains the full Certificate Authority chain `ca.crt` that is enabled to authenticate against this ingress. It's composed of namespace/secretName. - -``` -ingress.kubernetes.io/auth-tls-verify-depth -``` - -The validation depth between the provided client certificate and the Certification Authority chain. - -``` -ingress.kubernetes.io/auth-tls-verify-client -``` - -Enables verification of client certificates. - -``` -ingress.kubernetes.io/auth-tls-error-page -``` - -The URL/Page that user should be redirected in case of a Certificate Authentication Error - -Please check the [tls-auth](/examples/auth/client-certs/nginx/README.md) example. - -### Configuration snippet - -Using this annotation you can add additional configuration to the NGINX location. For example: - -``` -ingress.kubernetes.io/configuration-snippet: | - more_set_headers "Request-Id: $request_id"; -``` -### Default Backend - -The ingress controller requires a default backend. This service is handle the response when the service in the Ingress rule does not have endpoints. -This is a global configuration for the ingress controller. In some cases could be required to return a custom content or format. In this scenario we can use the annotation `ingress.kubernetes.io/default-backend: ` to specify a custom default backend. - -### Enable CORS - -To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation `ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server location enabling this functionality. -For more information please check https://enable-cors.org/server_nginx.html - -### Server Alias - -To add Server Aliases to an Ingress rule add the annotation `ingress.kubernetes.io/server-alias: ""`. -This will create a server with two server_names (hostname and alias) - -*Note:* A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias -annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created -the new server configuration will take place over the alias configuration. 
- -For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name - -### Server snippet - -Using the annotation `ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block. - -``` -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: -annotations: -ingress.kubernetes.io/server-snippet: | -set $agentflag 0; - -if ($http_user_agent ~* "(Mobile)" ){ - set $agentflag 1; -} - -if ( $agentflag = 1 ) { - return 301 https://m.example.com; -} -``` - -**Important:** This annotation can be used only once per host - -### Client Body Buffer Size - -Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, -the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. -This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is -applied to each location provided in the ingress rule. - -*Note:* The annotation value must be given in a valid format otherwise the -For example to set the client-body-buffer-size the following can be done: -* `ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes -* `ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte -* `ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte -* `ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte -* `ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte - -For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size - -### External Authentication - -To use an existing service that provides authentication the Ingress rule can be annotated with `ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent. -Additionally it is possible to set `ingress.kubernetes.io/auth-method` to specify the HTTP method to use (GET or POST). - -``` -ingress.kubernetes.io/auth-url: "URL to the authentication service" -``` - -Please check the [external-auth](/examples/auth/external-auth/nginx/README.md) example. - -### Rewrite - -In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. -Set the annotation `ingress.kubernetes.io/rewrite-target` to the path expected by the service. - -If the application contains relative links it is possible to add an additional annotation `ingress.kubernetes.io/add-base-url` that will prepend a [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) in the header of the returned HTML from the backend. - -If the scheme of [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) need to be specific, set the annotation `ingress.kubernetes.io/base-url-scheme` to the scheme such as `http` and `https`. - -If the Application Root is exposed in a different path and needs to be redirected, set the annotation `ingress.kubernetes.io/app-root` to redirect requests for `/`. - -Please check the [rewrite](/examples/rewrite/nginx/README.md) example. - -### Rate limiting - -The annotations `ingress.kubernetes.io/limit-connections`, `ingress.kubernetes.io/limit-rps`, and `ingress.kubernetes.io/limit-rpm` define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus). 
- -`ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address. - -`ingress.kubernetes.io/limit-rps`: number of connections that may be accepted from a given IP each second. - -`ingress.kubernetes.io/limit-rpm`: number of connections that may be accepted from a given IP each minute. - -You can specify the client IP source ranges to be excluded from rate-limiting through the `ingress.kubernetes.io/limit-whitelist` annotation. The value is a comma separated list of CIDRs. - -If you specify multiple annotations in a single Ingress rule, `limit-rpm`, and then `limit-rps` takes precedence. - -The annotation `ingress.kubernetes.io/limit-rate`, `ingress.kubernetes.io/limit-rate-after` define a limit the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. - -`ingress.kubernetes.io/limit-rate-after`: sets the initial amount after which the further transmission of a response to a client will be rate limited. - -`ingress.kubernetes.io/limit-rate`: rate of request that accepted from a client each second. - -To configure this setting globally for all Ingress rules, the `limit-rate-after` and `limit-rate` value may be set in the NGINX ConfigMap. if you set the value in ingress annotation will cover global setting. - -### SSL Passthrough - -The annotation `ingress.kubernetes.io/ssl-passthrough` allows to configure TLS termination in the pod and not in NGINX. - -**Important:** -- Using the annotation `ingress.kubernetes.io/ssl-passthrough` invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP). -- The use of this annotation requires the flag `--enable-ssl-passthrough` (By default it is disabled) - -### Secure backends - -By default NGINX uses `http` to reach the services. Adding the annotation `ingress.kubernetes.io/secure-backends: "true"` in the Ingress rule changes the protocol to `https`. - -### Service Upstream - -By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue [#257](https://github.com/kubernetes/ingress/issues/257). - -#### Known Issues - -If the `service-upstream` annotation is specified the following things should be taken into consideration: - -* Sticky Sessions will not work as only round-robin load balancing is supported. -* The `proxy_next_upstream` directive will not have any effect meaning on error the request will not be dispatched to another upstream. - -### Server-side HTTPS enforcement through redirect - -By default the controller redirects (301) to `HTTPS` if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use `ssl-redirect: "false"` in the NGINX config map. - -To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource. - -When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to `HTTPS` even when there is not TLS cert available. 
This can be achieved by using the `ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource. - -### Redirect from to www - -In some scenarios is required to redirect from `www.domain.com` to `domain.com` or viceversa. -To enable this feature use the annotation `ingress.kubernetes.io/from-to-www-redirect: "true"` - -**Important:** -If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`) the annotation will be omitted. - - -### Whitelist source range - -You can specify the allowed client IP source ranges through the `ingress.kubernetes.io/whitelist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`. - -To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the NGINX ConfigMap. - -*Note:* Adding an annotation to an Ingress rule overrides any global restriction. - -Please check the [whitelist](/examples/affinity/cookie/nginx/README.md) example. - -### Session Affinity - -The annotation `ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. -The only affinity type available for NGINX is `cookie`. - -### Cookie affinity -If you use the ``cookie`` type you can also specify the name of the cookie that will be used to route the requests with the annotation `ingress.kubernetes.io/session-cookie-name`. The default is to create a cookie named 'route'. - -In case of NGINX the annotation `ingress.kubernetes.io/session-cookie-hash` defines which algorithm will be used to 'hash' the used upstream. Default value is `md5` and possible values are `md5`, `sha1` and `index`. -The `index` option is not hashed, an in-memory index is used instead, it's quicker and the overhead is shorter Warning: the matching against upstream servers list is inconsistent. So, at reload, if upstreams servers has changed, index values are not guaranteed to correspond to the same server as before! USE IT WITH CAUTION and only if you need to! - -In NGINX this feature is implemented by the third party module [nginx-sticky-module-ng](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng). The workflow used to define which upstream server will be used is explained [here](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/raw/08a395c66e425540982c00482f55034e1fee67b6/docs/sticky.pdf) - -### Custom timeouts - -Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. -In some scenarios is required to have different values. To allow this we provide annotations that allows this customization: - -- ingress.kubernetes.io/proxy-connect-timeout -- ingress.kubernetes.io/proxy-send-timeout -- ingress.kubernetes.io/proxy-read-timeout -- ingress.kubernetes.io/proxy-request-buffering - - -### **Allowed parameters in configuration ConfigMap** - -**proxy-body-size:** Sets the maximum allowed size of the client request body. See NGINX [client_max_body_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size). - -**custom-http-errors:** Enables which HTTP codes should be passed for processing with the [error_page directive](http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page). 
-Setting at least one code also enables [proxy_intercept_errors](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors) which are required to process error_page. - -Example usage: `custom-http-errors: 404,415` - -**disable-access-log:** Disables the Access Log from the entire Ingress Controller. This is 'false' by default. - -**access-log-path:** Access log path. Goes to '/var/log/nginx/access.log' by default. http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log - -**error-log-path:** Error log path. Goes to '/var/log/nginx/error.log' by default. http://nginx.org/en/docs/ngx_core_module.html#error_log - -**enable-modsecurity:** enables the modsecurity module for NGINX -By default this is disabled - -**enable-owasp-modsecurity-crs:** enables the OWASP ModSecurity Core Rule Set (CRS) -By default this is disabled - -**disable-ipv6:** Disable listening on IPV6. This is 'false' by default. - -**enable-dynamic-tls-records:** Enables dynamically sized TLS records to improve time-to-first-byte. Enabled by default. See [CloudFlare's blog](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency) for more information. - -**enable-underscores-in-headers:** Enables underscores in header names. This is disabled by default. - -**enable-vts-status:** Allows the replacement of the default status page with a third party module named [nginx-module-vts](https://github.com/vozlt/nginx-module-vts). - -**error-log-level:** Configures the logging level of errors. Log levels above are listed in the order of increasing severity. -http://nginx.org/en/docs/ngx_core_module.html#error_log - -**gzip-types:** Sets the MIME types in addition to "text/html" to compress. The special value "\*" matches any MIME type. -Responses with the "text/html" type are always compressed if `use-gzip` is enabled. - -**hsts:** Enables or disables the header HSTS in servers running SSL. -HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft. -https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security -https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server - -**hsts-include-subdomains:** Enables or disables the use of HSTS in all the subdomains of the server-name. - -**hsts-max-age:** Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. - -**hsts-preload:** Enables or disables the preload attribute in the HSTS feature (if is enabled) - -**ignore-invalid-headers:** set if header fields with invalid names should be ignored. This is 'true' by default. - -**keep-alive:** Sets the time during which a keep-alive client connection will stay open on the server side. -The zero value disables keep-alive client connections. -http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout - -**load-balance:** Sets the algorithm to use for load balancing. The value can either be round_robin to -use the default round robin loadbalancer, least_conn to use the least connected method, or -ip_hash to use a hash of the server for routing. The default is least_conn. -http://nginx.org/en/docs/http/load_balancing.html. 
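As a quick illustration of how the options above are consumed, the sketch below sets a few of them in the controller's configuration ConfigMap. The name `nginx-configuration` and namespace `ingress-nginx` are the ones used by the deploy manifests; adjust them to whatever your `--configmap` flag points to, and treat the values as examples only.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  # must match the ConfigMap referenced by the --configmap flag
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  # illustrative values for some of the options documented above
  disable-access-log: "false"
  enable-vts-status: "true"
  error-log-level: "warn"
  load-balance: "round_robin"
  keep-alive: "75"
```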
- -**log-format-upstream:** Sets the nginx [log format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format). - -Example for json output: - -``` -log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", - "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$request_id", "remote_user": - "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": - $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", - "request_query": "$args", "request_length": $request_length, "duration": $request_time, - "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": - "$http_user_agent" }' - ``` - -**log-format-stream:** Sets the nginx [stream format](https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format). - -**max-worker-connections:** Sets the maximum number of simultaneous connections that can be opened by each [worker process](http://nginx.org/en/docs/ngx_core_module.html#worker_connections). - -**proxy-buffer-size:** Sets the size of the buffer used for [reading the first part of the response](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header. - -**proxy-connect-timeout:** Sets the timeout for [establishing a connection with a proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds. - -**proxy-cookie-domain:** Sets a text that [should be changed in the domain attribute](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the “Set-Cookie” header fields of a proxied server response. - -**proxy-cookie-path:** Sets a text that [should be changed in the path attribute](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the “Set-Cookie” header fields of a proxied server response. - -**proxy-read-timeout:** Sets the timeout in seconds for [reading a response from the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response. - -**proxy-send-timeout:** Sets the timeout in seconds for [transmitting a request to the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). The timeout is set only between two successive write operations, not for the transmission of the whole request. - -**proxy-next-upstream:** Specifies in [which cases](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream) a request should be passed to the next server. - -**proxy-request-buffering:** Enables or disables [buffering of a client request body](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering). - -**retry-non-idempotent:** Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. - -The previous behavior can be restored using the value "true". - -**server-name-hash-bucket-size:** Sets the size of the bucket for the server names hash tables. 
-http://nginx.org/en/docs/hash.html -http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size - -**server-name-hash-max-size:** Sets the maximum size of the [server names hash tables](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directive’s values, MIME types, names of request header strings, etc. -http://nginx.org/en/docs/hash.html - -**proxy-headers-hash-bucket-size:** Sets the size of the bucket for the proxy headers hash tables. -http://nginx.org/en/docs/hash.html -https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size - -**proxy-headers-hash-max-size:** Sets the maximum size of the proxy headers hash tables. -http://nginx.org/en/docs/hash.html -https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size - -**server-tokens:** Send NGINX Server header in responses and display NGINX version in error pages. Enabled by default. - -**map-hash-bucket-size:** Sets the bucket size for the [map variables hash tables](http://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size). The details of setting up hash tables are provided in a separate [document](http://nginx.org/en/docs/hash.html). - -**ssl-buffer-size:** Sets the size of the [SSL buffer](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data. -The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB). -https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/ - -**ssl-ciphers:** Sets the [ciphers](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library. - -The default cipher list is: - `ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256`. - -The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. -The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy). - -Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/). - -**ssl-dh-param:** Sets the name of the secret that contains Diffie-Hellman key to help with "Perfect Forward Secrecy". -https://www.openssl.org/docs/manmaster/apps/dhparam.html -https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam -http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam - -**ssl-protocols:** Sets the [SSL protocols](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use. -The default is: `TLSv1.2`. - -TLSv1 is enabled to allow old clients like: -- [IE 8-10 / Win 7](https://www.ssllabs.com/ssltest/viewClient.html?name=IE&version=8-10&platform=Win%207&key=113) -- [Java 7u25](https://www.ssllabs.com/ssltest/viewClient.html?name=Java&version=7u25&key=26) - -If you don't need to support these clients please remove `TLSv1` to improve security. - -Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`. 
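For example, to drop `TLSv1` and pin the controller to TLS 1.2 only, the relevant keys can be set in the same configuration ConfigMap. This is a sketch; the ConfigMap name is the one assumed above and the cipher string is shortened for readability, so treat both as illustrative.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration   # assumed name, see the --configmap flag
  namespace: ingress-nginx
data:
  # restrict protocols and ciphers; values are illustrative
  ssl-protocols: "TLSv1.2"
  ssl-ciphers: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305"
```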
- -**ssl-redirect:** Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule) -Default is "true". - -**ssl-session-cache:** Enables or disables the use of shared [SSL cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes. - -**ssl-session-cache-size:** Sets the size of the [SSL shared session cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes. - -**ssl-session-tickets:** Enables or disables session resumption through [TLS session tickets](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets). - -**ssl-session-ticket-key:** sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. -http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets -By default, a randomly generated key is used. -To create a ticket: `openssl rand 80 | base64 -w0` - -**ssl-session-timeout:** Sets the time during which a client may [reuse the session](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache. - -**upstream-max-fails:** Sets the number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that should happen in the duration set by the `fail_timeout` parameter to consider the server unavailable. - -**upstream-fail-timeout:** Sets the time during which the specified number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) should happen to consider the server unavailable. - -**use-gzip:** Enables or disables compression of HTTP responses using the ["gzip" module](http://nginx.org/en/docs/http/ngx_http_gzip_module.html) -The default mime type list to compress is: `application/atom+xml application/javascript aplication/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`. - -**use-http2:** Enables or disables [HTTP/2](http://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections. - -**use-proxy-protocol:** Enables or disables the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB). - -**whitelist-source-range:** Sets the default whitelisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule. See [ngx_http_access_module](http://nginx.org/en/docs/http/ngx_http_access_module.html). - -**worker-processes:** Sets the number of [worker processes](http://nginx.org/en/docs/ngx_core_module.html#worker_processes). The default of "auto" means number of available CPU cores. - -**worker-shutdown-timeout:** Sets a timeout for Nginx to [wait for worker to gracefully shutdown](http://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout). The default is "10s". - -**limit-conn-zone-variable:** Sets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone). 
The default of "$binary_remote_addr" variable’s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses. - -**proxy-set-headers:** Sets custom headers from a configmap before sending traffic to backends. See [example](https://github.com/kubernetes/ingress/tree/master/examples/customization/custom-headers/nginx) - -**add-headers:** Sets custom headers from a configmap before sending traffic to the client. See `proxy-set-headers` [example](https://github.com/kubernetes/ingress/tree/master/examples/customization/custom-headers/nginx) - -**bind-address:** Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop. - -**enable-opentracing:** enables the nginx Opentracing extension https://github.com/rnburn/nginx-opentracing -Default is "false" - -**zipkin-collector-host:** specifies the host to use when uploading traces. It must be a valid URL - -**zipkin-collector-port:** specifies the port to use when uploading traces -Default: 9411 - -**zipkin-service-name:** specifies the service name to use for any traces created -Default: nginx - -**http-snippet:** adds custom configuration to the http section of the nginx configuration -Default: "" - -**server-snippet:** adds custom configuration to all the servers in the nginx configuration -Default: "" - -**location-snippet:** adds custom configuration to all the locations in the nginx configuration -Default: "" - - -### Default configuration options - -The following table shows the options, the default value and a description. - -|name |default| -|---------------------------|------| -|body-size|1m| -|custom-http-errors|" "| -|enable-dynamic-tls-records|"true"| -|enable-sticky-sessions|"false"| -|enable-underscores-in-headers|"false"| -|enable-vts-status|"false"| -|error-log-level|notice| -|gzip-types|see use-gzip description above| -|hsts|"true"| -|hsts-include-subdomains|"true"| -|hsts-max-age|"15724800"| -|hsts-preload|"false"| -|ignore-invalid-headers|"true"| -|keep-alive|"75"| -|log-format-stream|[$time_local] $protocol $status $bytes_sent $bytes_received $session_time| -|log-format-upstream|[$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status| -|map-hash-bucket-size|"64"| -|max-worker-connections|"16384"| -|proxy-body-size|same as body-size| -|proxy-buffer-size|"4k"| -|proxy-request-buffering|"on"| -|proxy-connect-timeout|"5"| -|proxy-cookie-domain|"off"| -|proxy-cookie-path|"off"| -|proxy-read-timeout|"60"| -|proxy-real-ip-cidr|0.0.0.0/0| -|proxy-send-timeout|"60"| -|retry-non-idempotent|"false"| -|server-name-hash-bucket-size|"64"| -|server-name-hash-max-size|"512"| -|server-tokens|"true"| -|ssl-buffer-size|4k| -|ssl-ciphers|| -|ssl-dh-param|value from openssl| -|ssl-protocols|TLSv1 TLSv1.1 TLSv1.2| -|ssl-session-cache|"true"| -|ssl-session-cache-size|10m| -|ssl-session-tickets|"true"| -|ssl-session-timeout|10m| -|use-gzip|"true"| -|use-http2|"true"| -|upstream-keepalive-connections|"0" (disabled)| -|variables-hash-bucket-size|64| -|variables-hash-max-size|2048| -|vts-status-zone-size|10m| -|vts-default-filter-key|$geoip_country_code country::*| -|whitelist-source-range|permit all| -|worker-processes|number of CPUs| -|limit-conn-zone-variable|$binary_remote_addr| -|bind-address|| - -### Websockets - 
-Support for websockets is provided by NGINX out of the box. No special configuration required. - -The only requirement to avoid the close of connections is the increase of the values of `proxy-read-timeout` and `proxy-send-timeout`. The default value of this settings is `60 seconds`. -A more adequate value to support websockets is a value higher than one hour (`3600`). - -### Optimizing TLS Time To First Byte (TTTFB) - -NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (NGINX default is `16k`). - -### Retries in non-idempotent methods - -Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. -The previous behavior can be restored using `retry-non-idempotent=true` in the configuration ConfigMap. - -### Custom max body size -For NGINX, 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size). - -To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the NGINX ConfigMap. -To use custom values in an Ingress rule define these annotation: - -``` -ingress.kubernetes.io/proxy-body-size: 8m -``` diff --git a/deploy/README.md b/deploy/README.md new file mode 100644 index 0000000000..32bd01af52 --- /dev/null +++ b/deploy/README.md @@ -0,0 +1,187 @@ +# Installation Guide + +## Contents + +- [Mandatory commands](#mandatory-commands) +- [Install without RBAC roles](#install-without-rbac-roles) +- [Install with RBAC roles](#install-with-rbac-roles) +- [Custom Provider](#custom-provider) + - [minikube](#minikube) + - [AWS](#aws) + - [GCE - GKE](#gce-gke) + - [Azure](#azure) + - [Baremetal](#baremetal) +- [Using Helm](#using-helm) +- [Verify installation](#verify-installation) +- [Detect installed version](#detect-installed-version) + +## Mandatory commands + +```console +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \ + | kubectl apply -f - +``` + +## Install without RBAC roles + +```console +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml \ + | kubectl apply -f - +``` + +## Install with RBAC roles + +Please check the [RBAC](rbac.md) document. 
+ +```console +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \ + | kubectl apply -f - +``` + +## Custom Service provider + +There are cloud provider specific yaml files + +### minikube + +```console +minikube addons enable ingress +``` + +### AWS + +In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`. +This setup requires to choose in wich layer (L4 or L7) we want to configure the ELB: + +- [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer): use TCP as the listener protocol for ports 80 and 443. +- [Layer 7](https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_Layer): use HTTP as the listener protocol for port 80 and terminate TLS in the ELB + +For L4: + +```console +kubectl apply -f provider/aws/service-l4.yaml +kubectl apply -f provider/aws/patch-configmap-l4.yaml +``` + +For L7: + +Change line of the file `provider/aws/service-l7.yaml` replacing the dummy id with a valid one `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"` +Then execute: + +```console +kubectl apply -f provider/aws/service-l7.yaml +kubectl apply -f provider/aws/patch-configmap-l7.yaml +``` + +This example creates an ELB with just two listeners, one in port 80 and another in port 443 + +![Listeners](../docs/images/listener.png) + +If the ingress controller uses RBAC run: + +```console +kubectl apply -f provider/patch-service-with-rbac.yaml +``` + +If not run: + +```console +kubectl apply -f provider/patch-service-without-rbac.yaml +``` + +### GCE - GKE + +```console +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/gce-gke/service.yaml \ + | kubectl apply -f - +``` + +If the ingress controller uses RBAC run: + +```console +kubectl apply -f provider/patch-service-with-rbac.yaml +``` + +If not run: + +```console +kubectl apply -f provider/patch-service-without-rbac.yaml +``` + +**Important Note:** proxy protocol is not supported in GCE/GKE + +### Azure + +```console +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/azure/service.yaml \ + | kubectl apply -f - +``` + +If the ingress controller uses RBAC run: + +```console +kubectl apply -f provider/patch-service-with-rbac.yaml +``` + +If not run: + +```console +kubectl apply -f provider/patch-service-without-rbac.yaml +``` + +**Important Note:** proxy protocol is not supported in GCE/GKE + +### Baremetal + +Using [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport): + +```console +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml \ + | kubectl apply -f - +``` + +## Using Helm + +NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [stable/nginx](https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress) from the official charts repository. +To install the chart with the release name `my-nginx`: + +```console +helm install stable/nginx-ingress --name my-nginx +``` + +## Verify installation + +To check if the ingress controller pods have started, run the following command: + +```console +kubectl get pods --all-namespaces -l app=ingress-nginx --watch +``` + +Once the operator pods are running, you can cancel the above command by typing `Ctrl+C`. 
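If you also want to confirm that the controller itself is healthy, one option is to port-forward to a controller pod and query its health endpoint. This is a sketch: it assumes `10254` is the controller's default healthz port and reuses the label and namespace from the manifests above.

```console
POD_NAME=$(kubectl get pods -n ingress-nginx -l app=ingress-nginx -o jsonpath={.items[0].metadata.name})
kubectl port-forward -n ingress-nginx $POD_NAME 10254:10254 &
# give the port-forward a moment to establish, then query the health endpoint
sleep 2 && curl http://127.0.0.1:10254/healthz
```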
+ +Now, you are ready to create your first ingress. + +## Detect installed version + +To detect which version of the ingress controller is running, exec into the pod and run `nginx-ingress-controller version` command. + +```console +POD_NAMESPACE=ingress-nginx +POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=ingress-nginx -o jsonpath={.items[0].metadata.name}) +kubectl exec -it $POD_NAME -n $POD_NAMESPACE /nginx-ingress-controller version +``` diff --git a/deploy/configmap.yaml b/deploy/configmap.yaml new file mode 100644 index 0000000000..08e91017ea --- /dev/null +++ b/deploy/configmap.yaml @@ -0,0 +1,7 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx diff --git a/examples/rbac/default-backend.yml b/deploy/default-backend.yaml similarity index 84% rename from examples/rbac/default-backend.yml rename to deploy/default-backend.yaml index 31cabfc4d0..0f752a20f8 100644 --- a/examples/rbac/default-backend.yml +++ b/deploy/default-backend.yaml @@ -3,14 +3,14 @@ kind: Deployment metadata: name: default-http-backend labels: - k8s-app: default-http-backend - namespace: default + app: default-http-backend + namespace: ingress-nginx spec: replicas: 1 template: metadata: labels: - k8s-app: default-http-backend + app: default-http-backend spec: terminationGracePeriodSeconds: 60 containers: @@ -36,16 +36,17 @@ spec: cpu: 10m memory: 20Mi --- + apiVersion: v1 kind: Service metadata: name: default-http-backend - namespace: default + namespace: ingress-nginx labels: - k8s-app: default-http-backend + app: default-http-backend spec: ports: - port: 80 targetPort: 8080 selector: - k8s-app: default-http-backend + app: default-http-backend diff --git a/deploy/namespace.yaml b/deploy/namespace.yaml new file mode 100644 index 0000000000..6878f0be88 --- /dev/null +++ b/deploy/namespace.yaml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: ingress-nginx diff --git a/deploy/provider/aws/patch-configmap-l4.yaml b/deploy/provider/aws/patch-configmap-l4.yaml new file mode 100644 index 0000000000..18805a5354 --- /dev/null +++ b/deploy/provider/aws/patch-configmap-l4.yaml @@ -0,0 +1,9 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +data: + use-proxy-protocol: "true" diff --git a/deploy/provider/aws/patch-configmap-l7.yaml b/deploy/provider/aws/patch-configmap-l7.yaml new file mode 100644 index 0000000000..394f3962c9 --- /dev/null +++ b/deploy/provider/aws/patch-configmap-l7.yaml @@ -0,0 +1,9 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +data: + use-proxy-protocol: "false" diff --git a/deploy/provider/aws/service-l4.yaml b/deploy/provider/aws/service-l4.yaml new file mode 100644 index 0000000000..b8de049d62 --- /dev/null +++ b/deploy/provider/aws/service-l4.yaml @@ -0,0 +1,20 @@ +kind: Service +apiVersion: v1 +metadata: + name: ingress-nginx + namespace: ingress-nginx + labels: + app: ingress-nginx + annotations: + service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*' +spec: + type: LoadBalancer + selector: + app: ingress-nginx + ports: + - name: http + port: 80 + targetPort: http + - name: https + port: 443 + targetPort: https diff --git a/deploy/provider/aws/service-l7.yaml b/deploy/provider/aws/service-l7.yaml new file mode 100644 index 0000000000..a1bb0b3075 --- /dev/null +++ b/deploy/provider/aws/service-l7.yaml @@ -0,0 +1,25 @@ 
+kind: Service +apiVersion: v1 +metadata: + name: ingress-nginx + namespace: ingress-nginx + labels: + app: ingress-nginx + annotations: + # replace with the correct value of the generated certifcate in the AWS console + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX" + # the backend instances are HTTP + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" + # Map port 443 + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" +spec: + type: LoadBalancer + selector: + app: ingress-nginx + ports: + - name: http + port: 80 + targetPort: http + - name: https + port: 443 + targetPort: http diff --git a/deploy/provider/azure/service.yaml b/deploy/provider/azure/service.yaml new file mode 100644 index 0000000000..0af8b11f4e --- /dev/null +++ b/deploy/provider/azure/service.yaml @@ -0,0 +1,19 @@ +kind: Service +apiVersion: v1 +metadata: + name: ingress-nginx + namespace: ingress-nginx + labels: + app: ingress-nginx +spec: + externalTrafficPolicy: Local + type: LoadBalancer + selector: + app: ingress-nginx + ports: + - name: http + port: 80 + targetPort: http + - name: https + port: 443 + targetPort: http diff --git a/examples/rbac/nginx-ingress-controller-service.yml b/deploy/provider/baremetal/service-nodeport.yml similarity index 54% rename from examples/rbac/nginx-ingress-controller-service.yml rename to deploy/provider/baremetal/service-nodeport.yml index e40c69b578..a00f2453bd 100644 --- a/examples/rbac/nginx-ingress-controller-service.yml +++ b/deploy/provider/baremetal/service-nodeport.yml @@ -1,21 +1,18 @@ apiVersion: v1 kind: Service metadata: - name: nginx-ingress - namespace: nginx-ingress + name: ingress-nginx + namespace: ingress-nginx spec: -# Can also use LoadBalancer type type: NodePort ports: - name: http - port: 8080 - nodePort: 30080 + port: 80 targetPort: 80 protocol: TCP - name: https port: 443 - nodePort: 30443 targetPort: 443 protocol: TCP selector: - k8s-app: nginx-ingress-lb + app: ingress-nginx diff --git a/deploy/provider/gce-gke/service.yaml b/deploy/provider/gce-gke/service.yaml new file mode 100644 index 0000000000..0af8b11f4e --- /dev/null +++ b/deploy/provider/gce-gke/service.yaml @@ -0,0 +1,19 @@ +kind: Service +apiVersion: v1 +metadata: + name: ingress-nginx + namespace: ingress-nginx + labels: + app: ingress-nginx +spec: + externalTrafficPolicy: Local + type: LoadBalancer + selector: + app: ingress-nginx + ports: + - name: http + port: 80 + targetPort: http + - name: https + port: 443 + targetPort: http diff --git a/deploy/provider/patch-service-with-rbac.yaml b/deploy/provider/patch-service-with-rbac.yaml new file mode 100644 index 0000000000..96c3f362ca --- /dev/null +++ b/deploy/provider/patch-service-with-rbac.yaml @@ -0,0 +1,40 @@ +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: nginx-ingress-controller + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: ingress-nginx + template: + metadata: + labels: + app: ingress-nginx + spec: + serviceAccountName: nginx-ingress-serviceaccount + containers: + - name: nginx-ingress-controller + image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 + args: + - /nginx-ingress-controller + - --default-backend-service=$(POD_NAMESPACE)/default-http-backend + - --configmap=$(POD_NAMESPACE)/nginx-configuration + - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services + - --udp-services-configmap=$(POD_NAMESPACE)/udp-services + - 
--publish-service=$(POD_NAMESPACE)/ingress-nginx + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 diff --git a/deploy/provider/patch-service-without-rbac.yaml b/deploy/provider/patch-service-without-rbac.yaml new file mode 100644 index 0000000000..dfcd0b00b6 --- /dev/null +++ b/deploy/provider/patch-service-without-rbac.yaml @@ -0,0 +1,39 @@ +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: nginx-ingress-controller + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: ingress-nginx + template: + metadata: + labels: + app: ingress-nginx + spec: + containers: + - name: nginx-ingress-controller + image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 + args: + - /nginx-ingress-controller + - --default-backend-service=$(POD_NAMESPACE)/default-http-backend + - --configmap=$(POD_NAMESPACE)/nginx-configuration + - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services + - --udp-services-configmap=$(POD_NAMESPACE)/udp-services + - --publish-service=$(POD_NAMESPACE)/ingress-nginx + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 diff --git a/examples/rbac/README.md b/deploy/rbac.md similarity index 56% rename from examples/rbac/README.md rename to deploy/rbac.md index 192fa6dd56..718269f363 100644 --- a/examples/rbac/README.md +++ b/deploy/rbac.md @@ -1,23 +1,19 @@ -# Role Based Access Control - -This example demonstrates how to apply an nginx ingress controller with role based access control +# Role Based Access Control - RBAC ## Overview -This example applies to nginx-ingress-controllers being deployed in an -environment with RBAC enabled. +This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: -1. `ClusterRole` - permissions assigned to a role that apply to an entire cluster -2. `ClusterRoleBinding` - binding a ClusterRole to a specific account -3. `Role` - permissions assigned to a role that apply to a specific namespace -4. `RoleBinding` - binding a Role to a specific account +1. `ClusterRole` - permissions assigned to a role that apply to an entire cluster +2. `ClusterRoleBinding` - binding a ClusterRole to a specific account +3. `Role` - permissions assigned to a role that apply to a specific namespace +4. `RoleBinding` - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a `ServiceAccount`. That `ServiceAccount` should be -bound to the `Role`s and `ClusterRole`s defined for the -nginx-ingress-controller. +bound to the `Role`s and `ClusterRole`s defined for the nginx-ingress-controller. ## Service Accounts created in this example @@ -27,8 +23,7 @@ One ServiceAccount is created in this example, `nginx-ingress-serviceaccount`. There are two sets of permissions defined in this example. Cluster-wide permissions defined by the `ClusterRole` named `nginx-ingress-clusterrole`, and -namespace specific permissions defined by the `Role` named -`nginx-ingress-role`. +namespace specific permissions defined by the `Role` named `nginx-ingress-role`. 
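After applying `rbac.yaml`, you can verify that these objects were created with `kubectl`. The commands below are a sketch; the object names match the manifests in this directory.

```console
kubectl get serviceaccount nginx-ingress-serviceaccount -n ingress-nginx
kubectl get clusterrole nginx-ingress-clusterrole
kubectl get role nginx-ingress-role -n ingress-nginx
```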
### Cluster Permissions @@ -76,41 +71,6 @@ nginx-ingress-controller. The ServiceAccount `nginx-ingress-serviceaccount` is bound to the Role `nginx-ingress-role` and the ClusterRole `nginx-ingress-clusterrole`. -## Namespace created in this example - -The `Namespace` named `nginx-ingress` is defined in this example. The -namespace name can be changed arbitrarily as long as all of the references -change as well. - - -## Usage - -1. Create the `Namespace`, `Service Account`, `ClusterRole`, `Role`, -`ClusterRoleBinding`, and `RoleBinding`. - -```sh -kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml -``` - -2. Create default backend -```sh -kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/default-backend.yml -``` - -3. Create the nginx-ingress-controller - -For this example to work, the Service must be in the nginx-ingress namespace: - -```sh -kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller.yml -``` - The serviceAccountName associated with the containers in the deployment must -match the serviceAccount from nginx-ingress-controller-rbac.yml The namespace -references in the Deployment metadata, container arguments, and POD_NAMESPACE -should be in the nginx-ingress namespace. - -4. Create ingress service -```sh -kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-service.yml -``` +match the serviceAccount. The namespace references in the Deployment metadata, +container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace. diff --git a/examples/rbac/nginx-ingress-controller-rbac.yml b/deploy/rbac.yaml similarity index 90% rename from examples/rbac/nginx-ingress-controller-rbac.yml rename to deploy/rbac.yaml index 8bd611bb62..301853216b 100644 --- a/examples/rbac/nginx-ingress-controller-rbac.yml +++ b/deploy/rbac.yaml @@ -1,14 +1,11 @@ apiVersion: v1 -kind: Namespace -metadata: - name: nginx-ingress ---- -apiVersion: v1 kind: ServiceAccount metadata: name: nginx-ingress-serviceaccount - namespace: nginx-ingress + namespace: ingress-nginx + --- + apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: @@ -60,12 +57,14 @@ rules: - ingresses/status verbs: - update + --- + apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: name: nginx-ingress-role - namespace: nginx-ingress + namespace: ingress-nginx rules: - apiGroups: - "" @@ -101,14 +100,14 @@ rules: - endpoints verbs: - get - - create - - update + --- + apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: name: nginx-ingress-role-nisa-binding - namespace: nginx-ingress + namespace: ingress-nginx roleRef: apiGroup: rbac.authorization.k8s.io kind: Role @@ -116,8 +115,10 @@ roleRef: subjects: - kind: ServiceAccount name: nginx-ingress-serviceaccount - namespace: nginx-ingress + namespace: ingress-nginx + --- + apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: @@ -129,4 +130,4 @@ roleRef: subjects: - kind: ServiceAccount name: nginx-ingress-serviceaccount - namespace: nginx-ingress + namespace: ingress-nginx diff --git a/deploy/tcp-services-configmap.yaml b/deploy/tcp-services-configmap.yaml new file mode 100644 index 0000000000..a963085d3e --- /dev/null +++ b/deploy/tcp-services-configmap.yaml @@ -0,0 +1,5 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: tcp-services + 
namespace: ingress-nginx diff --git a/deploy/udp-services-configmap.yaml b/deploy/udp-services-configmap.yaml new file mode 100644 index 0000000000..1870931a20 --- /dev/null +++ b/deploy/udp-services-configmap.yaml @@ -0,0 +1,5 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: udp-services + namespace: ingress-nginx diff --git a/deploy/with-rbac.yaml b/deploy/with-rbac.yaml new file mode 100644 index 0000000000..91855cc9de --- /dev/null +++ b/deploy/with-rbac.yaml @@ -0,0 +1,39 @@ +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: nginx-ingress-controller + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: ingress-nginx + template: + metadata: + labels: + app: ingress-nginx + spec: + serviceAccountName: nginx-ingress-serviceaccount + containers: + - name: nginx-ingress-controller + image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 + args: + - /nginx-ingress-controller + - --default-backend-service=$(POD_NAMESPACE)/default-http-backend + - --configmap=$(POD_NAMESPACE)/nginx-configuration + - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services + - --udp-services-configmap=$(POD_NAMESPACE)/udp-services + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 diff --git a/deploy/without-rbac.yaml b/deploy/without-rbac.yaml new file mode 100644 index 0000000000..d8be5da470 --- /dev/null +++ b/deploy/without-rbac.yaml @@ -0,0 +1,38 @@ +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: nginx-ingress-controller + namespace: ingress-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: ingress-nginx + template: + metadata: + labels: + app: ingress-nginx + spec: + containers: + - name: nginx-ingress-controller + image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 + args: + - /nginx-ingress-controller + - --default-backend-service=$(POD_NAMESPACE)/default-http-backend + - --configmap=$(POD_NAMESPACE)/nginx-configuration + - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services + - --udp-services-configmap=$(POD_NAMESPACE)/udp-services + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + ports: + - name: http + containerPort: 80 + - name: https + containerPort: 443 diff --git a/docs/README.md b/docs/README.md deleted file mode 100644 index f630209e29..0000000000 --- a/docs/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# Ingress Documentation and Examples - -This directory contains documentation. - -## File naming convention - -Try to create a README file in every directory containing documentation and index -out from there, that's what readers will notice first. Use lower case for other -file names unless you have a reason to draw someone's attention to it. -Avoid CamelCase. - -Rationale: - -* Files that are common to all controllers, or heavily index other files, are -named using ALL CAPS. This is done to indicate to the user that they should -visit these files first. Examples include PREREQUISITES and README. - -* Files specific to a controller, or files that contain information about -various controllers, are named using all lower case. Examples include -configuration and catalog files. 
- diff --git a/docs/admin.md b/docs/admin.md deleted file mode 100644 index c40247bd9e..0000000000 --- a/docs/admin.md +++ /dev/null @@ -1,55 +0,0 @@ -# Ingress Admin Guide - -This is a guide to the different deployment styles of an Ingress controller. - -## Vanillla deployments - -__GCP__: On GCE/GKE, the Ingress controller runs on the -master. If you wish to stop this controller and run another instance on your -nodes instead, you can do so by following this [example](/examples/deployment/gce). - -__Generic__: You can deploy a generic (nginx or haproxy) Ingress controller by simply -running it as a pod in your cluster, as shown in the [examples](/examples/deployment). -Please note that you must specify the `ingress.class` -[annotation](/examples/PREREQUISITES.md#ingress-class) if you're running on a -cloudprovider, or the cloudprovider controller will fight the nginx controller -for the Ingress. - -__AWS__: Until we have an AWS ALB Ingress controller, you can deploy the nginx -Ingress controller behind an ELB on AWS, as shows in the [next section](#stacked-deployments). - -## Stacked deployments - -__Behind a LoadBalancer Service__: You can deploy a generic controller behind a -Service of `Type=LoadBalancer`, by following this [example](/examples/static-ip/nginx#acquiring-an-ip). -More specifically, first create a LoadBalancer Service that selects the generic -controller pods, then start the generic controller with the `--publish-service` -flag. - - -__Behind another Ingress__: Sometimes it is desirable to deploy a stack of -Ingresses, like the GCE Ingress -> nginx Ingress -> application. You might -want to do this because the GCE HTTP lb offers some features that the GCE -network LB does not, like a global static IP or CDN, but doesn't offer all the -features of nginx, like url rewriting or redirects. - -TODO: Write an example - -## Daemonset - -Neither a single pod nor bank of generic controllers scale with the cluster size. -If you create a daemonset of generic Ingress controllers, every new node -automatically gets an instance of the controller listening on the specified -ports. - -TODO: Write an example - -## Intra-cluster Ingress - -Since generic Ingress controllers run in pods, you can deploy them as intra-cluster -proxies by just not exposing them on a `hostPort` and putting them behind a -Service of `Type=ClusterIP`. - -TODO: Write an example - - diff --git a/docs/dev/README.md b/docs/dev/README.md deleted file mode 100644 index 968ffc3dad..0000000000 --- a/docs/dev/README.md +++ /dev/null @@ -1,18 +0,0 @@ -# Ingress Development Guide - -This directory is intended to be the canonical source of truth for things like -writing and hacking on Ingress controllers. If you find a requirement that this -doc does not capture, please submit an issue on github. If you find other docs -with references to requirements that are not simply links to this doc, please -submit an issue. - -This document is intended to be relative to the branch in which it is found. -It is guaranteed that requirements will change over time for the development -branch, but release branches of Kubernetes should not change. 
- -## Navigation - -* [Build, test, release](getting-started.md) an existing controller -* [Setup a cluster](setup-cluster.md) to hack at an existing controller -* [Write your own](custom-controller.md) controller - diff --git a/docs/dev/custom-controller.md b/docs/dev/custom-controller.md deleted file mode 100644 index e3d7c94c2f..0000000000 --- a/docs/dev/custom-controller.md +++ /dev/null @@ -1,4 +0,0 @@ -# Writing Ingress controllers - -This doc outlines the basic steps needed to write an Ingress controller. -If you want the tl;dr version, skip straight to the [example](/examples/custom-controller). diff --git a/docs/dev/getting-started.md b/docs/dev/getting-started.md deleted file mode 100644 index 6c01d1170d..0000000000 --- a/docs/dev/getting-started.md +++ /dev/null @@ -1,141 +0,0 @@ -# Getting Started - -This document explains how to get started with developing for Kubernetes Ingress. -It includes how to build, test, and release ingress controllers. - -## Dependencies - -The build uses dependencies in the `ingress/vendor` directory, which -must be installed before building a binary/image. Occasionally, you -might need to update the dependencies. - -This guide requires you to install the [godep](https://github.com/tools/godep) dependency -tool. - -Check the version of `godep` you are using and make sure it is up to date. -```console -$ godep version -godep v74 (linux/amd64/go1.6.1) -``` - -If you have an older version of `godep`, you can update it as follows: -```console -$ cd $GOPATH/src/ingress -$ go get github.com/tools/godep -$ cd $GOPATH/src/github.com/tools/godep -$ go build -o godep *.go -``` - -This will automatically save the dependencies to the `vendor/` directory. -```console -$ cd $GOPATH/src/ingress -$ godep save ./... -``` - -In general, you can follow [this guide](https://github.com/kubernetes/community/blob/master/contributors/devel/godep.md#using-godep-to-manage-dependencies) to update dependencies. -To update a particular dependency, eg: Kubernetes: -```console -$ cd $GOPATH/src/k8s.io/ingress -$ godep restore -$ go get -u k8s.io/kubernetes -$ cd $GOPATH/src/k8s.io/kubernetes -$ godep restore -$ cd $GOPATH/src/k8s.io/kubernetes/ingress -$ rm -rf Godeps -$ godep save ./... -$ git [add/remove] as needed -$ git commit -``` - -## Building - -All ingress controllers are built through a Makefile. Depending on your -requirements you can build a raw server binary, a local container image, -or push an image to a remote repository. - -In order to use your local Docker, you may need to set the following environment variables: -```console -# "gcloud docker" (default) or "docker" -$ export DOCKER= - -# "gcr.io/google_containers" (default), "index.docker.io", or your own registry -$ export REGISTRY= -``` -To find the registry simply run: `docker system info | grep Registry` - -### Nginx Controller - -Build a raw server binary -```console -$ make controllers -``` - -[TODO](https://github.com/kubernetes/ingress/issues/387): add more specific instructions needed for raw server binary. - -Build a local container image -```console -$ make docker-build TAG= PREFIX=$USER/ingress-controller -``` - -Push the container image to a remote repository -```console -$ make docker-push TAG= PREFIX=$USER/ingress-controller -``` - -### GCE Controller - -[TODO](https://github.com/kubernetes/ingress/issues/387): add instructions on building gce controller. - -## Deploying - -There are several ways to deploy the ingress controller onto a cluster. 
If you don't have a cluster start by -creating one [here](setup-cluster.md). - -* [nginx controller](../../examples/deployment/nginx/README.md) -* [gce controller](../../examples/deployment/gce/README.md) - -## Testing - -To run unit-tests, enter each directory in `controllers/` -```console -$ cd $GOPATH/src/k8s.io/ingress/controllers/ -$ go test ./... -``` - -If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo. -```console -$ cd $GOPATH/src/k8s.io/kubernetes -$ ./hack/ginkgo-e2e.sh --ginkgo.focus=Ingress.* --delete-namespace-on-failure=false -``` - -See also [related FAQs](../faq#how-are-the-ingress-controllers-tested). - -[TODO](https://github.com/kubernetes/ingress/issues/5): add instructions on running integration tests, or e2e against -local-up/minikube. - -## Releasing - -All Makefiles will produce a release binary, as shown above. To publish this -to a wider Kubernetes user base, push the image to a container registry, like -[gcr.io](https://cloud.google.com/container-registry/). All release images are hosted under `gcr.io/google_containers` and -tagged according to a [semver](http://semver.org/) scheme. - -An example release might look like: -``` -$ make push TAG=0.8.0 PREFIX=gcr.io/google_containers/glbc -``` - -Please follow these guidelines to cut a release: - -* Update the [release](https://help.github.com/articles/creating-releases/) -page with a short description of the major changes that correspond to a given -image tag. -* Cut a release branch, if appropriate. Release branches follow the format of -`controller-release-version`. Typically, pre-releases are cut from HEAD. -All major feature work is done in HEAD. Specific bug fixes are -cherry-picked into a release branch. -* If you're not confident about the stability of the code, -[tag](https://help.github.com/articles/working-with-tags/) it as alpha or beta. -Typically, a release branch should have stable code. - - diff --git a/docs/dev/setup-cluster.md b/docs/dev/setup-cluster.md deleted file mode 100644 index 06aa9a6301..0000000000 --- a/docs/dev/setup-cluster.md +++ /dev/null @@ -1,115 +0,0 @@ -# Cluster Getting Started - -This doc outlines the steps needed to setup a local dev cluster within which you -can deploy/test an ingress controller. Note that you can also setup the ingress controller -locally. - -## Deploy a Development cluster - -### Single node local cluster - -You can run the nginx ingress controller locally on any node with access to the -internet, and the following dependencies: [docker](https://docs.docker.com/engine/getstarted/step_one/), [etcd](https://github.com/coreos/etcd/releases), [golang](https://golang.org/doc/install), [cfssl](https://github.com/cloudflare/cfssl#installation), [openssl](https://www.openssl.org/), [make](https://www.gnu.org/software/make/), [gcc](https://gcc.gnu.org/), [git](https://git-scm.com/download/linux). - - -Clone the kubernetes repo: -```console -$ cd $GOPATH/src/k8s.io -$ git clone https://github.com/kubernetes/kubernetes.git -``` - -Add yourself to the docker group, if you haven't done so already (or give -local-up-cluster sudo) -``` -$ sudo usermod -aG docker $USER -$ sudo reboot -.. 
-$ docker ps -CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES -``` - -**NB: the next step will bring up Kubernetes daemons directly on your dev -machine, no sandbox, iptables rules, routes, loadbalancers, network bridges -etc are created on the host.** - -```console -$ cd $GOPATH/src/k8s.io/kubernetes -$ hack/local-up-cluster.sh -``` - -Check for Ready nodes -```console -$ kubectl get no --context=local -NAME STATUS AGE VERSION -127.0.0.1 Ready 5s v1.6.0-alpha.0.1914+8ccecf93aa6db5-dirty -``` - -### Minikube cluster - -[Minikube](https://github.com/kubernetes/minikube) is a popular way to bring up -a sandboxed local cluster. You will first need to [install](https://github.com/kubernetes/minikube/releases) -the minikube binary, then bring up a cluster -```console -$ minikube start -``` - -Check for Ready nodes -```console -$ kubectl get no -NAME STATUS AGE VERSION -minikube Ready 42m v1.4.6 -``` - -List the existing addons -```console -$ minikube addons list -- addon-manager: enabled -- dashboard: enabled -- kube-dns: enabled -- heapster: disabled -``` - -If this list already contains the ingress controller, you don't need to -redeploy it. If the addon controller is disabled, you can enable it with -```console -$ minikube addons enable ingress -``` - -If the list *does not* contain the ingress controller, you can either update -minikube, or deploy it yourself as shown in the next section. - -You may want to consider [using the VM's docker -daemon](https://github.com/kubernetes/minikube/blob/master/README.md#reusing-the-docker-daemon) -when developing. - -### CoreOS Kubernetes - -[CoreOS Kubernetes](https://github.com/coreos/coreos-kubernetes/) repository has `Vagrantfile` -scripts to easily create a new Kubernetes cluster on VirtualBox, VMware or AWS. - -Follow the CoreOS [doc](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html) -for detailed instructions. - -## Deploy the ingress controller - -You can deploy an ingress controller on the cluster setup in the previous step -[like this](../../examples/deployment). - -## Run against a remote cluster - -If the controller you're interested in using supports a "dry-run" flag, you can -run it on any machine that has `kubectl` access to a remote cluster. Eg: -```console -$ cd $GOPATH/k8s.io/ingress/controllers/gce -$ glbc --help - --running-in-cluster Optional, if this controller is running in a kubernetes cluster, use the - pod secrets for creating a Kubernetes client. (default true) - -$ ./glbc --running-in-cluster=false -I1210 17:49:53.202149 27767 main.go:179] Starting GLBC image: glbc:0.9.2, cluster name -``` - -Note that this is equivalent to running the ingress controller on your local -machine, so if you already have an ingress controller running in the remote -cluster, they will fight for the same ingress. - diff --git a/examples/PREREQUISITES.md b/docs/examples/PREREQUISITES.md similarity index 81% rename from examples/PREREQUISITES.md rename to docs/examples/PREREQUISITES.md index 7c6bf2fd62..94581901b5 100644 --- a/examples/PREREQUISITES.md +++ b/docs/examples/PREREQUISITES.md @@ -2,23 +2,6 @@ Many of the examples in this directory have common prerequisites. -## Deploying a controller - -Unless you're running on a cloudprovider that supports Ingress out of the box -(eg: GCE/GKE), you will need to deploy a controller. You can do so following -[these instructions](/examples/deployment). 
- -## Firewall rules - -If you're using a generic controller (eg the nginx ingress controller), you -will need to create a firewall rule that targets port 80/443 on the specific VMs -the nginx controller is running on. On cloudproviders, the respective backend -will auto-create firewall rules for your Ingress. - -If you'd like to auto-create firewall rules for an Ingress controller, -you can put it behind a Service of `Type=Loadbalancer` as shown in -[this example](/examples/static-ip/nginx#acquiring-an-ip). - ## TLS certificates Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA @@ -37,6 +20,7 @@ secret "tls-secret" created ``` ## CA Authentication + You can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our own CA, and also generate a client certificate. @@ -72,12 +56,13 @@ This will generate two files: A private key (ca.key) and a public key (ca.crt). The ca.crt can be used later in the step of creation of CA authentication secret. ### Generating the client certificate + The following steps generate a client certificate signed by the CA generated above. This client can be used to authenticate in a tls-auth configured ingress. First, we need to generate an 'openssl.cnf' file that will be used while signing the keys: -``` +```console [req] req_extensions = v3_req distinguished_name = req_distinguished_name @@ -103,8 +88,8 @@ $ openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -ou Then, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR). - ### Creating the CA Authentication secret + If you're using the CA Authentication feature, you need to generate a secret containing all the authorized CAs. You must download them from your CA site in PEM format (like the following): @@ -123,7 +108,6 @@ $ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem Then, you've to concatenate them all in only one file, named 'ca.crt' as the following: - ```console $ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt ``` @@ -160,6 +144,7 @@ http-svc 10.0.122.116 80:30301/TCP 1d ``` You can test that the HTTP Service works by exposing it temporarily + ```console $ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}' "http-svc" patched @@ -209,31 +194,3 @@ BODY: $ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}' "http-svc" patched ``` - -## Ingress Class - -If you have multiple Ingress controllers in a single cluster, you can pick one -by specifying the `ingress.class` annotation, eg creating an Ingress with an -annotation like - -```yaml -metadata: - name: foo - annotations: - kubernetes.io/ingress.class: "gce" -``` - -will target the GCE controller, forcing the nginx controller to ignore it, while -an annotation like - -```yaml -metadata: - name: foo - annotations: - kubernetes.io/ingress.class: "nginx" -``` - -will target the nginx controller, forcing the GCE controller to ignore it. - -__Note__: Deploying multiple ingress controller and not specifying the -annotation will result in both controllers fighting to satisfy the Ingress. 
diff --git a/examples/README.md b/docs/examples/README.md similarity index 100% rename from examples/README.md rename to docs/examples/README.md diff --git a/examples/affinity/cookie/README.md b/docs/examples/affinity/cookie/README.md similarity index 82% rename from examples/affinity/cookie/README.md rename to docs/examples/affinity/cookie/README.md index 51aeec310a..25a0a79c80 100644 --- a/examples/affinity/cookie/README.md +++ b/docs/examples/affinity/cookie/README.md @@ -1,15 +1,6 @@ # Sticky Session -This example demonstrates how to achieve session affinity using cookies - -## Prerequisites - -You will need to make sure you Ingress targets exactly one Ingress -controller by specifying the [ingress.class annotation](/examples/PREREQUISITES.md#ingress-class), -and that you have an ingress controller [running](/examples/deployment) in your cluster. - -You will also need to deploy multiple replicas of your application that show up as endpoints for the Service referenced in the Ingress object, to test session stickyness. -Using a deployment with only one replica doesn't set the 'sticky' cookie. +This example demonstrates how to achieve session affinity using cookies ## Deployment @@ -24,7 +15,7 @@ Session stickyness is achieved through 3 annotations on the Ingress, as shown in You can create the ingress to test this ```console -$ kubectl create -f sticky-ingress.yaml +kubectl create -f ingress.yaml ``` ## Validation diff --git a/examples/affinity/cookie/sticky-ingress.yaml b/docs/examples/affinity/cookie/ingress.yaml similarity index 82% rename from examples/affinity/cookie/sticky-ingress.yaml rename to docs/examples/affinity/cookie/ingress.yaml index 69beea75e6..5e89d580db 100644 --- a/examples/affinity/cookie/sticky-ingress.yaml +++ b/docs/examples/affinity/cookie/ingress.yaml @@ -3,7 +3,6 @@ kind: Ingress metadata: name: nginx-test annotations: - kubernetes.io/ingress.class: "nginx" ingress.kubernetes.io/affinity: "cookie" ingress.kubernetes.io/session-cookie-name: "route" ingress.kubernetes.io/session-cookie-hash: "sha1" @@ -14,6 +13,6 @@ spec: http: paths: - backend: - serviceName: nginx-service + serviceName: http-svc servicePort: 80 path: / diff --git a/examples/auth/basic/README.md b/docs/examples/auth/basic/README.md similarity index 96% rename from examples/auth/basic/README.md rename to docs/examples/auth/basic/README.md index fc70bdc11c..a786df2b42 100644 --- a/examples/auth/basic/README.md +++ b/docs/examples/auth/basic/README.md @@ -1,7 +1,8 @@ +# Basic Authentication This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with `htpasswd`. 
-``` +```console $ htpasswd -c auth foo New password: New password: @@ -9,12 +10,12 @@ Re-type new password: Adding password for user foo ``` -``` +```console $ kubectl create secret generic basic-auth --from-file=auth secret "basic-auth" created ``` -``` +```console $ kubectl get secret basic-auth -o yaml apiVersion: v1 data: @@ -26,7 +27,7 @@ metadata: type: Opaque ``` -``` +```console echo " apiVersion: extensions/v1beta1 kind: Ingress @@ -46,7 +47,7 @@ spec: paths: - path: / backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 " | kubectl create -f - ``` diff --git a/docs/examples/auth/client-certs/README.md b/docs/examples/auth/client-certs/README.md new file mode 100644 index 0000000000..e69de29bb2 diff --git a/examples/auth/client-certs/nginx-tls-auth.yaml b/docs/examples/auth/client-certs/nginx-tls-auth.yaml similarity index 94% rename from examples/auth/client-certs/nginx-tls-auth.yaml rename to docs/examples/auth/client-certs/nginx-tls-auth.yaml index fe920a0f9f..2527adf440 100644 --- a/examples/auth/client-certs/nginx-tls-auth.yaml +++ b/docs/examples/auth/client-certs/nginx-tls-auth.yaml @@ -7,7 +7,6 @@ metadata: ingress.kubernetes.io/auth-tls-verify-depth: "3" ingress.kubernetes.io/auth-tls-verify-client: "on" auth-tls-error-page: "http://www.mysite.com/error-cert.html" - kubernetes.io/ingress.class: "nginx" name: nginx-test namespace: default spec: diff --git a/examples/auth/external-auth/README.md b/docs/examples/auth/external-auth/README.md similarity index 98% rename from examples/auth/external-auth/README.md rename to docs/examples/auth/external-auth/README.md index db522c1d2f..aa726088a4 100644 --- a/examples/auth/external-auth/README.md +++ b/docs/examples/auth/external-auth/README.md @@ -29,7 +29,7 @@ spec: http: paths: - backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 path: / status: @@ -40,7 +40,8 @@ $ ``` Test 1: no username/password (expect code 401) -``` + +```console $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' * Rebuilt URL to: http://172.17.4.99/ * Trying 172.17.4.99... diff --git a/examples/auth/external-auth/ingress.yaml b/docs/examples/auth/external-auth/ingress.yaml similarity index 89% rename from examples/auth/external-auth/ingress.yaml rename to docs/examples/auth/external-auth/ingress.yaml index 1cf779ce29..00ba4a4f0a 100644 --- a/examples/auth/external-auth/ingress.yaml +++ b/docs/examples/auth/external-auth/ingress.yaml @@ -10,6 +10,6 @@ spec: http: paths: - backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 path: / \ No newline at end of file diff --git a/docs/examples/customization/configuration-snippets/README.md b/docs/examples/customization/configuration-snippets/README.md new file mode 100644 index 0000000000..0e079c0451 --- /dev/null +++ b/docs/examples/customization/configuration-snippets/README.md @@ -0,0 +1,12 @@ + +## Ingress +The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at [this example](/examples/customization/custom-headers/nginx). 
+ +```console +$ kubectl apply -f ingress.yaml +``` + +## Test + +Check if the contents of the annotation are present in the nginx.conf file using: +`kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf` diff --git a/examples/customization/configuration-snippets/ingress.yaml b/docs/examples/customization/configuration-snippets/ingress.yaml similarity index 90% rename from examples/customization/configuration-snippets/ingress.yaml rename to docs/examples/customization/configuration-snippets/ingress.yaml index e60d75f90f..e705f61096 100644 --- a/examples/customization/configuration-snippets/ingress.yaml +++ b/docs/examples/customization/configuration-snippets/ingress.yaml @@ -3,7 +3,6 @@ kind: Ingress metadata: name: nginx-configuration-snippet annotations: - kubernetes.io/ingress.class: "nginx" ingress.kubernetes.io/configuration-snippet: | more_set_headers "Request-Id: $request_id"; diff --git a/examples/customization/custom-configuration/README.md b/docs/examples/customization/custom-configuration/README.md similarity index 100% rename from examples/customization/custom-configuration/README.md rename to docs/examples/customization/custom-configuration/README.md diff --git a/examples/customization/custom-configuration/nginx-custom-configuration.yaml b/docs/examples/customization/custom-configuration/nginx-custom-configuration.yaml similarity index 100% rename from examples/customization/custom-configuration/nginx-custom-configuration.yaml rename to docs/examples/customization/custom-configuration/nginx-custom-configuration.yaml diff --git a/examples/customization/custom-configuration/nginx-load-balancer-conf.yaml b/docs/examples/customization/custom-configuration/nginx-load-balancer-conf.yaml similarity index 100% rename from examples/customization/custom-configuration/nginx-load-balancer-conf.yaml rename to docs/examples/customization/custom-configuration/nginx-load-balancer-conf.yaml diff --git a/examples/customization/custom-errors/README.md b/docs/examples/customization/custom-errors/README.md similarity index 100% rename from examples/customization/custom-errors/README.md rename to docs/examples/customization/custom-errors/README.md diff --git a/examples/customization/custom-errors/custom-default-backend.yaml b/docs/examples/customization/custom-errors/custom-default-backend.yaml similarity index 100% rename from examples/customization/custom-errors/custom-default-backend.yaml rename to docs/examples/customization/custom-errors/custom-default-backend.yaml diff --git a/examples/customization/custom-errors/rc-custom-errors.yaml b/docs/examples/customization/custom-errors/rc-custom-errors.yaml similarity index 100% rename from examples/customization/custom-errors/rc-custom-errors.yaml rename to docs/examples/customization/custom-errors/rc-custom-errors.yaml diff --git a/examples/customization/custom-headers/README.md b/docs/examples/customization/custom-headers/README.md similarity index 100% rename from examples/customization/custom-headers/README.md rename to docs/examples/customization/custom-headers/README.md diff --git a/examples/customization/custom-headers/custom-headers.yaml b/docs/examples/customization/custom-headers/custom-headers.yaml similarity index 100% rename from examples/customization/custom-headers/custom-headers.yaml rename to docs/examples/customization/custom-headers/custom-headers.yaml diff --git a/examples/customization/configuration-snippets/default-backend.yaml 
b/docs/examples/customization/custom-headers/default-backend.yaml similarity index 100% rename from examples/customization/configuration-snippets/default-backend.yaml rename to docs/examples/customization/custom-headers/default-backend.yaml diff --git a/examples/customization/configuration-snippets/nginx-ingress-controller.yaml b/docs/examples/customization/custom-headers/nginx-ingress-controller.yaml similarity index 100% rename from examples/customization/configuration-snippets/nginx-ingress-controller.yaml rename to docs/examples/customization/custom-headers/nginx-ingress-controller.yaml diff --git a/examples/customization/custom-headers/nginx-load-balancer-conf.yaml b/docs/examples/customization/custom-headers/nginx-load-balancer-conf.yaml similarity index 100% rename from examples/customization/custom-headers/nginx-load-balancer-conf.yaml rename to docs/examples/customization/custom-headers/nginx-load-balancer-conf.yaml diff --git a/examples/customization/custom-upstream-check/README.md b/docs/examples/customization/custom-upstream-check/README.md similarity index 83% rename from examples/customization/custom-upstream-check/README.md rename to docs/examples/customization/custom-upstream-check/README.md index de81c40ff5..d30259b48b 100644 --- a/examples/customization/custom-upstream-check/README.md +++ b/docs/examples/customization/custom-upstream-check/README.md @@ -1,3 +1,5 @@ +# Custom Upstream server checks + This example shows how is possible to create a custom configuration for a particular upstream associated with an Ingress rule. ``` @@ -5,7 +7,7 @@ echo " apiVersion: extensions/v1beta1 kind: Ingress metadata: - name: echoheaders + name: http-svc annotations: ingress.kubernetes.io/upstream-fail-timeout: "30" spec: @@ -15,14 +17,14 @@ spec: paths: - path: / backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 " | kubectl create -f - ``` Check the annotation is present in the Ingress rule: ``` -kubectl get ingress echoheaders -o yaml +kubectl get ingress http-svc -o yaml ``` Check the NGINX configuration is updated using kubectl or the status page: @@ -33,7 +35,7 @@ $ kubectl exec nginx-ingress-controller-v1ppm cat /etc/nginx/nginx.conf ``` .... 
- upstream default-echoheaders-x-80 { + upstream default-http-svc-x-80 { least_conn; server 10.2.92.2:8080 max_fails=5 fail_timeout=30; diff --git a/examples/customization/custom-upstream-check/custom-upstream.png b/docs/examples/customization/custom-upstream-check/custom-upstream.png similarity index 100% rename from examples/customization/custom-upstream-check/custom-upstream.png rename to docs/examples/customization/custom-upstream-check/custom-upstream.png diff --git a/examples/customization/custom-vts-metrics-prometheus/README.md b/docs/examples/customization/custom-vts-metrics-prometheus/README.md similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/README.md rename to docs/examples/customization/custom-vts-metrics-prometheus/README.md diff --git a/examples/customization/custom-headers/default-backend.yaml b/docs/examples/customization/custom-vts-metrics-prometheus/default-backend.yaml similarity index 100% rename from examples/customization/custom-headers/default-backend.yaml rename to docs/examples/customization/custom-vts-metrics-prometheus/default-backend.yaml diff --git a/examples/customization/custom-vts-metrics-prometheus/imgs/prometheus-filter-key-path.png b/docs/examples/customization/custom-vts-metrics-prometheus/imgs/prometheus-filter-key-path.png similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/imgs/prometheus-filter-key-path.png rename to docs/examples/customization/custom-vts-metrics-prometheus/imgs/prometheus-filter-key-path.png diff --git a/examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard-filter-key-path.png b/docs/examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard-filter-key-path.png similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard-filter-key-path.png rename to docs/examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard-filter-key-path.png diff --git a/examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard.png b/docs/examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard.png similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard.png rename to docs/examples/customization/custom-vts-metrics-prometheus/imgs/vts-dashboard.png diff --git a/examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller-service.yaml b/docs/examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller-service.yaml similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller-service.yaml rename to docs/examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller-service.yaml diff --git a/examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller.yaml b/docs/examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller.yaml similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller.yaml rename to docs/examples/customization/custom-vts-metrics-prometheus/nginx-ingress-controller.yaml diff --git a/examples/customization/custom-vts-metrics-prometheus/nginx-vts-metrics-conf.yaml b/docs/examples/customization/custom-vts-metrics-prometheus/nginx-vts-metrics-conf.yaml similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/nginx-vts-metrics-conf.yaml rename to 
docs/examples/customization/custom-vts-metrics-prometheus/nginx-vts-metrics-conf.yaml diff --git a/examples/customization/external-auth-headers/Makefile b/docs/examples/customization/external-auth-headers/Makefile similarity index 100% rename from examples/customization/external-auth-headers/Makefile rename to docs/examples/customization/external-auth-headers/Makefile diff --git a/examples/customization/external-auth-headers/README.md b/docs/examples/customization/external-auth-headers/README.md similarity index 100% rename from examples/customization/external-auth-headers/README.md rename to docs/examples/customization/external-auth-headers/README.md diff --git a/examples/customization/external-auth-headers/authsvc/Dockerfile b/docs/examples/customization/external-auth-headers/authsvc/Dockerfile similarity index 100% rename from examples/customization/external-auth-headers/authsvc/Dockerfile rename to docs/examples/customization/external-auth-headers/authsvc/Dockerfile diff --git a/examples/customization/external-auth-headers/authsvc/authsvc.go b/docs/examples/customization/external-auth-headers/authsvc/authsvc.go similarity index 100% rename from examples/customization/external-auth-headers/authsvc/authsvc.go rename to docs/examples/customization/external-auth-headers/authsvc/authsvc.go diff --git a/examples/customization/external-auth-headers/deploy/auth-service.yaml b/docs/examples/customization/external-auth-headers/deploy/auth-service.yaml similarity index 100% rename from examples/customization/external-auth-headers/deploy/auth-service.yaml rename to docs/examples/customization/external-auth-headers/deploy/auth-service.yaml diff --git a/examples/customization/external-auth-headers/deploy/default-backend.yaml b/docs/examples/customization/external-auth-headers/deploy/default-backend.yaml similarity index 100% rename from examples/customization/external-auth-headers/deploy/default-backend.yaml rename to docs/examples/customization/external-auth-headers/deploy/default-backend.yaml diff --git a/examples/customization/external-auth-headers/deploy/echo-service.yaml b/docs/examples/customization/external-auth-headers/deploy/echo-service.yaml similarity index 100% rename from examples/customization/external-auth-headers/deploy/echo-service.yaml rename to docs/examples/customization/external-auth-headers/deploy/echo-service.yaml diff --git a/examples/customization/external-auth-headers/deploy/nginx-ingress-controller.yaml b/docs/examples/customization/external-auth-headers/deploy/nginx-ingress-controller.yaml similarity index 100% rename from examples/customization/external-auth-headers/deploy/nginx-ingress-controller.yaml rename to docs/examples/customization/external-auth-headers/deploy/nginx-ingress-controller.yaml diff --git a/examples/customization/external-auth-headers/echosvc/Dockerfile b/docs/examples/customization/external-auth-headers/echosvc/Dockerfile similarity index 100% rename from examples/customization/external-auth-headers/echosvc/Dockerfile rename to docs/examples/customization/external-auth-headers/echosvc/Dockerfile diff --git a/examples/customization/external-auth-headers/echosvc/echosvc.go b/docs/examples/customization/external-auth-headers/echosvc/echosvc.go similarity index 100% rename from examples/customization/external-auth-headers/echosvc/echosvc.go rename to docs/examples/customization/external-auth-headers/echosvc/echosvc.go diff --git a/examples/customization/ssl-dh-param/README.md b/docs/examples/customization/ssl-dh-param/README.md similarity index 100% 
rename from examples/customization/ssl-dh-param/README.md rename to docs/examples/customization/ssl-dh-param/README.md diff --git a/examples/customization/custom-vts-metrics-prometheus/default-backend.yaml b/docs/examples/customization/ssl-dh-param/default-backend.yaml similarity index 100% rename from examples/customization/custom-vts-metrics-prometheus/default-backend.yaml rename to docs/examples/customization/ssl-dh-param/default-backend.yaml diff --git a/examples/customization/custom-headers/nginx-ingress-controller.yaml b/docs/examples/customization/ssl-dh-param/nginx-ingress-controller.yaml similarity index 100% rename from examples/customization/custom-headers/nginx-ingress-controller.yaml rename to docs/examples/customization/ssl-dh-param/nginx-ingress-controller.yaml diff --git a/examples/customization/ssl-dh-param/nginx-load-balancer-conf.yaml b/docs/examples/customization/ssl-dh-param/nginx-load-balancer-conf.yaml similarity index 100% rename from examples/customization/ssl-dh-param/nginx-load-balancer-conf.yaml rename to docs/examples/customization/ssl-dh-param/nginx-load-balancer-conf.yaml diff --git a/examples/customization/ssl-dh-param/ssl-dh-param.yaml b/docs/examples/customization/ssl-dh-param/ssl-dh-param.yaml similarity index 100% rename from examples/customization/ssl-dh-param/ssl-dh-param.yaml rename to docs/examples/customization/ssl-dh-param/ssl-dh-param.yaml diff --git a/examples/external-auth/README.md b/docs/examples/external-auth/README.md similarity index 98% rename from examples/external-auth/README.md rename to docs/examples/external-auth/README.md index 92d8d93b15..21ccbdd3d7 100644 --- a/examples/external-auth/README.md +++ b/docs/examples/external-auth/README.md @@ -1,4 +1,4 @@ -## External Authentication +# External Authentication ### Overview @@ -18,7 +18,7 @@ same endpoint. Sample: -``` +```yaml ... metadata: name: application @@ -33,7 +33,7 @@ metadata: This example will show you how to deploy [`oauth2_proxy`](https://github.com/bitly/oauth2_proxy) into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider -#### Prepare: +#### Prepare 1. Install the kubernetes dashboard @@ -45,14 +45,11 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addon ![Register OAuth2 Application](images/register-oauth-app.png) - - Homepage URL is the FQDN in the Ingress rule, like `https://foo.bar.com` - Authorization callback URL is the same as the base FQDN plus `/oauth2`, like `https://foo.bar.com/oauth2` - ![Register OAuth2 Application](images/register-oauth-app-2.png) - 3. Configure oauth2_proxy values in the file oauth2-proxy.yaml with the values: - OAUTH2_PROXY_CLIENT_ID with the github `` @@ -64,13 +61,13 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addon Replace `__INGRESS_HOST__` with a valid FQDN and `__INGRESS_SECRET__` with a Secret with a valid SSL certificate. 5. 
Deploy the oauth2 proxy and the ingress rules running: + ```console $ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml ``` Test the oauth integration accessing the configured URL, like `https://foo.bar.com` - ![Register OAuth2 Application](images/github-auth.png) ![Github authentication](images/oauth-login.png) diff --git a/examples/external-auth/dashboard-ingress.yaml b/docs/examples/external-auth/dashboard-ingress.yaml similarity index 100% rename from examples/external-auth/dashboard-ingress.yaml rename to docs/examples/external-auth/dashboard-ingress.yaml diff --git a/examples/external-auth/images/dashboard.png b/docs/examples/external-auth/images/dashboard.png similarity index 100% rename from examples/external-auth/images/dashboard.png rename to docs/examples/external-auth/images/dashboard.png diff --git a/examples/external-auth/images/github-auth.png b/docs/examples/external-auth/images/github-auth.png similarity index 100% rename from examples/external-auth/images/github-auth.png rename to docs/examples/external-auth/images/github-auth.png diff --git a/examples/external-auth/images/oauth-login.png b/docs/examples/external-auth/images/oauth-login.png similarity index 100% rename from examples/external-auth/images/oauth-login.png rename to docs/examples/external-auth/images/oauth-login.png diff --git a/examples/external-auth/images/register-oauth-app-2.png b/docs/examples/external-auth/images/register-oauth-app-2.png similarity index 100% rename from examples/external-auth/images/register-oauth-app-2.png rename to docs/examples/external-auth/images/register-oauth-app-2.png diff --git a/examples/external-auth/images/register-oauth-app.png b/docs/examples/external-auth/images/register-oauth-app.png similarity index 100% rename from examples/external-auth/images/register-oauth-app.png rename to docs/examples/external-auth/images/register-oauth-app.png diff --git a/examples/external-auth/oauth2-proxy.yaml b/docs/examples/external-auth/oauth2-proxy.yaml similarity index 100% rename from examples/external-auth/oauth2-proxy.yaml rename to docs/examples/external-auth/oauth2-proxy.yaml diff --git a/examples/echo-header.yaml b/docs/examples/http-svc.yaml similarity index 52% rename from examples/echo-header.yaml rename to docs/examples/http-svc.yaml index a0fa1a4bff..58f3c527e6 100644 --- a/examples/echo-header.yaml +++ b/docs/examples/http-svc.yaml @@ -1,26 +1,28 @@ apiVersion: extensions/v1beta1 kind: Deployment metadata: - name: echoheaders + name: http-svc spec: replicas: 1 template: metadata: labels: - app: echoheaders + app: http-svc spec: containers: - - name: echoheaders + - name: http-svc image: gcr.io/google_containers/echoserver:1.8 ports: - containerPort: 8080 + --- + apiVersion: v1 kind: Service metadata: - name: echoheaders-x + name: http-svc labels: - app: echoheaders-x + app: http-svc spec: ports: - port: 80 @@ -28,19 +30,4 @@ spec: protocol: TCP name: http selector: - app: echoheaders ---- -apiVersion: v1 -kind: Service -metadata: - name: echoheaders-y - labels: - app: echoheaders-y -spec: - ports: - - port: 80 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: echoheaders \ No newline at end of file + app: http-svc diff --git a/examples/multi-tls/README.md b/docs/examples/multi-tls/README.md similarity index 93% rename from examples/multi-tls/README.md rename to docs/examples/multi-tls/README.md index 7eddc42d2f..dceefe9e11 100644 --- a/examples/multi-tls/README.md +++ b/docs/examples/multi-tls/README.md @@ -48,17 +48,17 @@ $ kubectl exec -it 
nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | proxy_http_version 1.1; - proxy_pass http://default-echoheaders-80; + proxy_pass http://default-http-svc-80; } ``` -And you should be able to reach your nginx service or echoheaders service using a hostname switch: +And you should be able to reach your nginx service or http-svc service using a hostname switch: ```console $ kubectl get ing NAME RULE BACKEND ADDRESS AGE foo-tls - 104.154.30.67 13m foo.bar.com - / echoheaders:80 + / http-svc:80 bar.baz.com / nginx:80 diff --git a/examples/multi-tls/multi-tls.yaml b/docs/examples/multi-tls/multi-tls.yaml similarity index 93% rename from examples/multi-tls/multi-tls.yaml rename to docs/examples/multi-tls/multi-tls.yaml index a8446aa62d..b503f620f3 100644 --- a/examples/multi-tls/multi-tls.yaml +++ b/docs/examples/multi-tls/multi-tls.yaml @@ -33,9 +33,9 @@ spec: apiVersion: v1 kind: Service metadata: - name: echoheaders + name: http-svc labels: - app: echoheaders + app: http-svc spec: ports: - port: 80 @@ -43,21 +43,21 @@ spec: protocol: TCP name: http selector: - app: echoheaders + app: http-svc --- apiVersion: v1 kind: ReplicationController metadata: - name: echoheaders + name: http-svc spec: replicas: 1 template: metadata: labels: - app: echoheaders + app: http-svc spec: containers: - - name: echoheaders + - name: http-svc image: gcr.io/google_containers/echoserver:1.8 ports: - containerPort: 8080 @@ -108,7 +108,7 @@ spec: http: paths: - backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 path: / - host: bar.baz.com diff --git a/examples/rewrite/README.md b/docs/examples/rewrite/README.md similarity index 97% rename from examples/rewrite/README.md rename to docs/examples/rewrite/README.md index b3e50a88dc..b214b37c11 100644 --- a/examples/rewrite/README.md +++ b/docs/examples/rewrite/README.md @@ -40,7 +40,7 @@ spec: http: paths: - backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 path: /something " | kubectl create -f - @@ -108,7 +108,7 @@ spec: http: paths: - backend: - serviceName: echoheaders + serviceName: http-svc servicePort: 80 path: / " | kubectl create -f - diff --git a/examples/static-ip/README.md b/docs/examples/static-ip/README.md similarity index 99% rename from examples/static-ip/README.md rename to docs/examples/static-ip/README.md index 67df63a57e..988968956d 100644 --- a/examples/static-ip/README.md +++ b/docs/examples/static-ip/README.md @@ -1,8 +1,6 @@ # Static IPs - -This example demonstrates how to assign a static-ip to an Ingress on through -the Nginx controller. +This example demonstrates how to assign a static-ip to an Ingress on through the Nginx controller. ## Prerequisites diff --git a/examples/static-ip/nginx-ingress-controller.yaml b/docs/examples/static-ip/nginx-ingress-controller.yaml similarity index 100% rename from examples/static-ip/nginx-ingress-controller.yaml rename to docs/examples/static-ip/nginx-ingress-controller.yaml diff --git a/examples/static-ip/nginx-ingress.yaml b/docs/examples/static-ip/nginx-ingress.yaml similarity index 80% rename from examples/static-ip/nginx-ingress.yaml rename to docs/examples/static-ip/nginx-ingress.yaml index 6cdd81fc89..1db6ee335b 100644 --- a/examples/static-ip/nginx-ingress.yaml +++ b/docs/examples/static-ip/nginx-ingress.yaml @@ -1,9 +1,7 @@ apiVersion: extensions/v1beta1 kind: Ingress metadata: - name: nginx-ingress - annotations: - kubernetes.io/ingress.class: "nginx" + name: ingress-nginx spec: tls: # This assumes tls-secret exists. 
diff --git a/examples/static-ip/static-ip-svc.yaml b/docs/examples/static-ip/static-ip-svc.yaml similarity index 100% rename from examples/static-ip/static-ip-svc.yaml rename to docs/examples/static-ip/static-ip-svc.yaml diff --git a/examples/tls-termination/nginx/README.md b/docs/examples/tls-termination/README.md similarity index 75% rename from examples/tls-termination/nginx/README.md rename to docs/examples/tls-termination/README.md index 8ad3ac1424..849cde1158 100644 --- a/examples/tls-termination/nginx/README.md +++ b/docs/examples/tls-termination/README.md @@ -4,19 +4,15 @@ This example demonstrates how to terminate TLS through the nginx Ingress control ## Prerequisites -You need a [TLS cert](/examples/PREREQUISITES.md#tls-certificates) and a [test HTTP service](/examples/PREREQUISITES.md#test-http-service) for this example. -You will also need to make sure you Ingress targets exactly one Ingress -controller by specifying the [ingress.class annotation](/examples/PREREQUISITES.md#ingress-class), -and that you have an ingress controller [running](/examples/deployment) in your cluster. +You need a [TLS cert](../PREREQUISITES.md#tls-certificates) and a [test HTTP service](../PREREQUISITES.md#test-http-service) for this example. ## Deployment -The following command instructs the controller to terminate traffic using -the provided TLS cert, and forward un-encrypted HTTP traffic to the test -HTTP service. +The following command instructs the controller to terminate traffic using the provided +TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service. ```console -$ kubectl create -f nginx-tls-ingress.yaml +kubectl apply -f ingress.yaml ``` ## Validation diff --git a/examples/tls-termination/nginx/nginx-tls-ingress.yaml b/docs/examples/tls-termination/ingress.yaml similarity index 85% rename from examples/tls-termination/nginx/nginx-tls-ingress.yaml rename to docs/examples/tls-termination/ingress.yaml index c73452dd60..9c12a4bc5d 100644 --- a/examples/tls-termination/nginx/nginx-tls-ingress.yaml +++ b/docs/examples/tls-termination/ingress.yaml @@ -2,8 +2,6 @@ apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-test - annotations: - kubernetes.io/ingress.class: "nginx" spec: tls: # This assumes tls-secret exists. @@ -15,4 +13,3 @@ spec: # This assumes http-svc exists and routes to healthy endpoints. serviceName: http-svc servicePort: 80 - diff --git a/docs/faq/README.md b/docs/faq/README.md deleted file mode 100644 index 2c8e86bd3a..0000000000 --- a/docs/faq/README.md +++ /dev/null @@ -1,148 +0,0 @@ -# Ingress FAQ - -This page contains general FAQ for Ingress, there is also a per-backend FAQ -in this directory with site specific information. 
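Looping back to the TLS termination example a few hunks above: once `ingress.yaml` is applied, a rough validation against the controller could look like the sketch below, where `<ingress-ip>` stands in for whatever address your controller is reachable on and `foo.bar.com` must match the host configured in your rule:

```console
# -k skips certificate verification, since the example certificate is self-signed
$ curl -kv https://<ingress-ip> -H 'Host: foo.bar.com'
```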
- -Table of Contents -================= - -* [How is Ingress different from a Service?](#how-is-ingress-different-from-a-service) -* [I created an Ingress and nothing happens, what now?](#i-created-an-ingress-and-nothing-happens-what-now) -* [How do I deploy an Ingress controller?](#how-do-i-deploy-an-ingress-controller) -* [Are Ingress controllers namespaced?](#are-ingress-controllers-namespaced) -* [How do I disable an Ingress controller?](#how-do-i-disable-an-ingress-controller) -* [How do I run multiple Ingress controllers in the same cluster?](#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster) -* [How do I contribute a backend to the generic Ingress controller?](#how-do-i-contribute-a-backend-to-the-generic-ingress-controller) -* [Is there a catalog of existing Ingress controllers?](#is-there-a-catalog-of-existing-ingress-controllers) -* [How are the Ingress controllers tested?](#how-are-the-ingress-controllers-tested) -* [An Ingress controller E2E is failing, what should I do?](#an-ingress-controller-e2e-is-failing-what-should-i-do) -* [Is there a roadmap for Ingress features?](#is-there-a-roadmap-for-ingress-features) - -## How is Ingress different from a Service? - -The Kubernetes Service is an abstraction over endpoints (pod-ip:port pairings). -The Ingress is an abstraction over Services. This doesn't mean all Ingress -controller must route *through* a Service, but rather, that routing, security -and auth configuration is represented in the Ingress resource per Service, and -not per pod. As long as this configuration is respected, a given Ingress -controller is free to route to the DNS name of a Service, the VIP, a NodePort, -or directly to the Service's endpoints. - -## I created an Ingress and nothing happens, what now? - -Run `describe` on the Ingress. If you see create/add events, you have an Ingress -controller running in the cluster, otherwise, you either need to deploy or -restart your Ingress controller. If the events associated with an Ingress are -insufficient to debug, consult the controller specific FAQ. - -## How do I deploy an Ingress controller? - -The following platforms currently deploy an Ingress controller addon: GCE, GKE, -minikube. If you're running on any other platform, you can deploy an Ingress -controller by following [this](/examples/deployment) example. - -## Are Ingress controllers namespaced? - -Ingress is namespaced, this means 2 Ingress objects can have the same name in 2 -namespaces, and must only point to Services in its own namespace. An admin can -deploy an Ingress controller such that it only satisfies Ingress from a given -namespace, but by default, controllers will watch the entire Kubernetes cluster -for unsatisfied Ingress. - -## How do I disable an Ingress controller? - -Either shutdown the controller satisfying the Ingress, or use the -`ingress.class` annotation: - -```yaml -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: test - annotations: - kubernetes.io/ingress.class: "nginx" -spec: - tls: - - secretName: tls-secret - backend: - serviceName: echoheaders-https - servicePort: 80 -``` - -The GCE controller will only act on Ingresses with the annotation value of "gce" or empty string "" (the default value if the annotation is omitted). - -The nginx controller will only act on Ingresses with the annotation value of "nginx" or empty string "" (the default value if the annotation is omitted). 
- -To completely stop the Ingress controller on GCE/GKE, please see [this](gce.md#how-do-i-disable-the-gce-ingress-controller) faq. - -## How do I run multiple Ingress controllers in the same cluster? - -Multiple Ingress controllers can co-exist and key off the `ingress.class` -annotation, as shown in this faq, as well as in [this](/examples/daemonset/nginx) example. - -## How do I contribute a backend to the generic Ingress controller? - -First check the [catalog](#is-there-a-catalog-of-existing-ingress-controllers), to make sure you really need to write one. - -1. Write a [generic backend](/examples/custom-controller) -2. Keep it in your own repo, make sure it passes the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/ingress_utils.go#L129) -3. Submit an example(s) in the appropriate subdirectories [here](/examples/README.md) -4. Add it to the catalog - -## Is there a catalog of existing Ingress controllers? - -Yes, a non-comprehensive [catalog](/docs/catalog.md) exists. - -## How are the Ingress controllers tested? - -Testing for the Ingress controllers is divided between: -* Ingress repo: unit tests and pre-submit integration tests run via travis -* Kubernetes repo: [pre-submit e2e](https://k8s-testgrid.appspot.com/google-gce#gce&include-filter-by-regex=Loadbalancing), - [post-merge e2e](https://k8s-testgrid.appspot.com/google-gce#gci-gce-ingress), - [per release-branch e2e](https://k8s-testgrid.appspot.com/google-gce#gci-gce-ingress-1.5) - -The configuration for jenkins e2e tests are located [here](https://github.com/kubernetes/test-infra). -The Ingress E2Es are located [here](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/ingress.go), -each controller added to that suite must consistently pass the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/ingress_utils.go#L129). - -## An Ingress controller E2E is failing, what should I do? - -First, identify the reason for failure. - -* Look at the build log, if there's nothing obvious, search for quota issues. - * Find events logged by the controller in the build log - * Ctrl+f "quota" in the build log -* If the failure is in the GCE controller: - * Navigate to the test artifacts for that run and look at glbc.log, [eg](http://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-ingress-release-1.5/1234/artifacts/bootstrap-e2e-master/) - * Look up the `PROJECT=` line in the build log, and navigate to that project - looking for quota issues (`gcloud compute project-info describe project-name` - or navigate to the cloud console > compute > quotas) -* If the failure is for a non-cloud controller (eg: nginx) - * Make sure the firewall rules required by the controller are opened on the - right ports (80/443), since the jenkins builders run *outside* the - Kubernetes cluster. - -Note that you currently need help from a test-infra maintainer to access the GCE -test project. If you think the failures are related to project quota, cleanup -leaked resources and bump up quota before debugging the leak. - -If the preceding identification process fails, it's likely that the Ingress api -is broken upstream. Try to setup a [dev environment](/docs/dev/setup-cluster.md) from -HEAD and create an Ingress. You should be deploying the [latest](https://github.com/kubernetes/ingress/releases) -release image to the local cluster. 
- -If neither of these 2 strategies produces anything useful, you can either start -reverting images, or digging into the underlying infrastructure the e2es are -running on for more nefarious issues (like permission and scope changes for -some set of nodes on which an Ingress controller is running). - -## Is there a roadmap for Ingress features? - -The community is working on it. There are currently too many efforts in flight -to serialize into a flat roadmap. You might be interested in the following issues: -* Loadbalancing [umbrella issue](https://github.com/kubernetes/kubernetes/issues/24145) -* Service proxy [proposal](https://groups.google.com/forum/#!topic/kubernetes-sig-network/weni52UMrI8) -* Better [routing rules](https://github.com/kubernetes/kubernetes/issues/28443) -* Ingress [classes](https://github.com/kubernetes/kubernetes/issues/30151) - -As well as the issues in this repo. diff --git a/docs/faq/gce.md b/docs/faq/gce.md deleted file mode 100644 index 6f3ede8017..0000000000 --- a/docs/faq/gce.md +++ /dev/null @@ -1,412 +0,0 @@ -# GCE Ingress controller FAQ - -This page contains general FAQ for the GCE Ingress controller. - -Table of Contents -================= - -* [How do I deploy an Ingress controller?](#how-do-i-deploy-an-ingress-controller) -* [I created an Ingress and nothing happens, now what?](#i-created-an-ingress-and-nothing-happens-now-what) -* [What are the cloud resources created for a single Ingress?](#what-are-the-cloud-resources-created-for-a-single-ingress) -* [The Ingress controller events complain about quota, how do I increase it?](#the-ingress-controller-events-complain-about-quota-how-do-i-increase-it) -* [Why does the Ingress need a different instance group then the GKE cluster?](#why-does-the-ingress-need-a-different-instance-group-then-the-gke-cluster) -* [Why does the cloud console show 0/N healthy instances?](#why-does-the-cloud-console-show-0n-healthy-instances) -* [Can I configure GCE health checks through the Ingress?](#can-i-configure-gce-health-checks-through-the-ingress) -* [Why does my Ingress have an ephemeral ip?](#why-does-my-ingress-have-an-ephemeral-ip) -* [Can I pre-allocate a static-ip?](#can-i-pre-allocate-a-static-ip) -* [Does updating a Kubernetes secrete update the GCE TLS certs?](#does-updating-a-kubernetes-secrete-update-the-gce-tls-certs) -* [Can I tune the loadbalancing algorithm?](#can-i-tune-the-loadbalancing-algorithm) -* [Is there a maximum number of Endpoints I can add to the Ingress?](#is-there-a-maximum-number-of-endpoints-i-can-add-to-the-ingress) -* [How do I match GCE resources to Kubernetes Services?](#how-do-i-match-gce-resources-to-kubernetes-services) -* [Can I change the cluster UID?](#can-i-change-the-cluster-uid) -* [Why do I need a default backend?](#why-do-i-need-a-default-backend) -* [How does Ingress work across 2 GCE clusters?](#how-does-ingress-work-across-2-gce-clusters) -* [I shutdown a cluster without deleting all Ingresses, how do I manually cleanup?](#i-shutdown-a-cluster-without-deleting-all-ingresses-how-do-i-manually-cleanup) -* [How do I disable the GCE Ingress controller?](#how-do-i-disable-the-gce-ingress-controller) -* [What GCE resources are shared between Ingresses?](#what-gce-resources-are-shared-between-ingresses) -* [How do I debug a controller spin loop?](#host-do-i-debug-a-controller-spinloop) -* [Creating an Internal Load Balancer without existing ingress](#creating-an-internal-load-balancer-without-existing-ingress) -* [Can I use websockets?](#can-i-use-websockets) - - -## 
How do I deploy an Ingress controller? - -On GCP (either GCE or GKE), every Kubernetes cluster has an Ingress controller -running on the master, no deployment necessary. You can deploy a second, -different (i.e non-GCE) controller, like [this](README.md#how-do-i-deploy-an-ingress-controller). -If you wish to deploy a GCE controller as a pod in your cluster, make sure to -turn down the existing auto-deployed Ingress controller as shown in this -[example](/examples/deployment/gce/). - -## I created an Ingress and nothing happens, now what? - -Please check the following: - -1. Output of `kubectl describe`, as shown [here](README.md#i-created-an-ingress-and-nothing-happens-what-now) -2. Do your Services all have a `NodePort`? -3. Do your Services either serve an HTTP status code 200 on `/`, or have a readiness probe - as described in [this section](#can-i-configure-gce-health-checks-through-the-ingress)? -4. Do you have enough GCP quota? - -## What are the cloud resources created for a single Ingress? - -__Terminology:__ - -* [Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules): Manages the Ingress VIP -* [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies): Manages SSL certs and proxies between the VIP and backend -* [Url Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map): Routing rules -* [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service): Bridges various Instance Groups on a given Service NodePort -* [Instance Group](https://cloud.google.com/compute/docs/instance-groups/): Collection of Kubernetes nodes - -The pipeline is as follows: - -``` -Global Forwarding Rule -> TargetHTTPProxy - | \ Instance Group (us-east1) - Static IP URL Map - Backend Service(s) - Instance Group (us-central1) - | / ... -Global Forwarding Rule -> TargetHTTPSProxy - ssl cert -``` - -In addition to this pipeline: -* Each Backend Service requires a HTTP or HTTPS health check to the NodePort of the Service -* Each port on the Backend Service has a matching port on the Instance Group -* Each port on the Backend Service is exposed through a firewall-rule open - to the GCE LB IP ranges (`130.211.0.0/22` and `35.191.0.0/16`) - -## The Ingress controller events complain about quota, how do I increase it? - -GLBC is not aware of your GCE quota. As of this writing users get 3 -[GCE Backend Services](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) -by default. If you plan on creating Ingresses for multiple Kubernetes Services, -remember that each one requires a backend service, and request quota. Should you -fail to do so the controller will poll periodically and grab the first free -backend service slot it finds. You can view your quota: - -```console -$ gcloud compute project-info describe --project myproject -``` -See [GCE documentation](https://cloud.google.com/compute/docs/resource-quotas#checking_your_quota) -for how to request more. - -## Why does the Ingress need a different instance group then the GKE cluster? - -The controller adds/removes Kubernetes nodes that are `NotReady` from the lb -instance group. We cannot simply rely on health checks to achieve this for -a few reasons. - -First, older Kubernetes versions (<=1.3) did not mark -endpoints on unreachable nodes as NotReady. 
Meaning if the Kubelet didn't -heart beat for 10s, the node was marked NotReady, but there was no other signal -at the Service level to stop routing requests to endpoints on that node. In -later Kubernetes version this is handled a little better, if the Kubelet -doesn't heart beat for 10s it's marked NotReady, if it stays in NotReady -for 40s all endpoints are marked NotReady. So it is still advantageous -to pull the node out of the GCE LB Instance Group in 10s, because we -save 30s of bad requests. - -Second, continuing to send requests to NotReady nodes is not a great idea. -The NotReady condition is an aggregate of various factors. For example, -a NotReady node might still pass health checks but have the wrong -nodePort to endpoint mappings. The health check will pass as long as *something* -returns a HTTP 200. - -## Why does the cloud console show 0/N healthy instances? - -Some nodes are reporting negatively on the GCE HTTP health check. -Please check the following: -1. Try to access any node-ip:node-port/health-check-url -2. Try to access any pubic-ip:node-port/health-check-url -3. Make sure you have a firewall-rule allowing access to the GCE LB IP range - (created by the Ingress controller on your behalf) -4. Make sure the right NodePort is opened in the Backend Service, and - consequently, plugged into the lb instance group - -## Can I configure GCE health checks through the Ingress? - -Currently health checks are not exposed through the Ingress resource, they're -handled at the node level by Kubernetes daemons (kube-proxy and the kubelet). -However the GCE L7 lb still requires a HTTP(S) health check to measure node -health. By default, this health check points at `/` on the nodePort associated -with a given backend. Note that the purpose of this health check is NOT to -determine when endpoint pods are overloaded, but rather, to detect when a -given node is incapable of proxying requests for the Service:nodePort -altogether. Overloaded endpoints are removed from the working set of a -Service via readiness probes conducted by the kubelet. - -If `/` doesn't work for your application, you can have the Ingress controller -program the GCE health check to point at a readiness probe as shows in [this](/examples/health-checks/) -example. - -We plan to surface health checks through the API soon. - -## Why does my Ingress have an ephemeral ip? - -GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances-and-network#ephemeraladdress) -and [static](https://cloud.google.com/compute/docs/instances-and-network#reservedaddress) IPs. A production -website would always want a static IP, which ephemeral IPs are cheaper (both in terms of quota and cost), and -are therefore better suited for experimentation. -* Creating a HTTP Ingress (i.e an Ingress without a TLS section) allocates an ephemeral IP for 2 reasons: - * we want to encourage secure defaults - * static-ips have limited quota and pure HTTP ingress is often used for testing -* Creating an Ingress with a TLS section allocates a static IP -* Modifying an Ingress and adding a TLS section allocates a static IP, but the - IP *will* change. This is a beta limitation. -* You can [promote](https://cloud.google.com/compute/docs/instances-and-network#promote_ephemeral_ip) - an ephemeral to a static IP by hand, if required. - -## Can I pre-allocate a static-ip? - -Yes, please see [this](/examples/static-ip) example. - -## Does updating a Kubernetes secret update the GCE TLS certs? - -Yes, expect O(30s) delay. 
- -The controller should create a second ssl certificate suffixed with `-1` and -atomically swap it with the ssl certificate in your taret proxy, then delete -the obselete ssl certificate. - -## Can I tune the loadbalancing algorithm? - -Right now, a kube-proxy nodePort is a necessary condition for Ingress on GCP. -This is because the cloud lb doesn't understand how to route directly to your -pods. Incorporating kube-proxy and cloud lb algorithms so they cooperate -toward a common goal is still a work in progress. If you really want fine -grained control over the algorithm, you should deploy the [nginx controller](/examples/deployment/nginx). - -## Is there a maximum number of Endpoints I can add to the Ingress? - -This limit is directly related to the maximum number of endpoints allowed in a -Kubernetes cluster, not the the HTTP LB configuration, since the HTTP LB sends -packets to VMs. Ingress is not yet supported on single zone clusters of size > -1000 nodes ([issue](https://github.com/kubernetes/contrib/issues/1724)). If -you'd like to use Ingress on a large cluster, spread it across 2 or more zones -such that no single zone contains more than a 1000 nodes. This is because there -is a [limit](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances) -to the number of instances one can add to a single GCE Instance Group. In a -multi-zone cluster, each zone gets its own instance group. - -## How do I match GCE resources to Kubernetes Services? - -The format followed for creating resources in the cloud is: -`k8s---`, where `nodeport` is the output of -```console -$ kubectl get svc --template '{{range $i, $e := .spec.ports}}{{$e.nodePort}},{{end}}' -``` - -`cluster-hash` is the output of: -```console -$ kubectl get configmap -o yaml --namespace=kube-system | grep -i " data:" -A 1 - data: - uid: cad4ee813812f808 -``` - -and `resource-name` is a short prefix for one of the resources mentioned [here](#what-are-the-cloud-resources-created-for-a-single-ingress) -(eg: `be` for backends, `hc` for health checks). If a given resource is not tied -to a single `node-port`, its name will not include the same. - -## Can I change the cluster UID? - -The Ingress controller configures itself to add the UID it stores in a configmap in the `kube-system` namespace. - -```console -$ kubectl --namespace=kube-system get configmaps -NAME DATA AGE -ingress-uid 1 12d - -$ kubectl --namespace=kube-system get configmaps -o yaml -apiVersion: v1 -items: -- apiVersion: v1 - data: - uid: UID - kind: ConfigMap -... -``` - -You can pick a different UID, but this requires you to: - -1. Delete existing Ingresses -2. Edit the configmap using `kubectl edit` -3. Recreate the same Ingress - -After step 3 the Ingress should come up using the new UID as the suffix of all cloud resources. You can't simply change the UID if you have existing Ingresses, because -renaming a cloud resource requires a delete/create cycle that the Ingress controller does not currently automate. Note that the UID in step 1 might be an empty string, -if you had a working Ingress before upgrading to Kubernetes 1.3. - -__A note on setting the UID__: The Ingress controller uses the token `--` to split a machine generated prefix from the UID itself. If the user supplied UID is found to -contain `--` the controller will take the token after the last `--`, and use an empty string if it ends with `--`. For example, if you insert `foo--bar` as the UID, -the controller will assume `bar` is the UID. 
You can either edit the configmap and set the UID to `bar` to match the controller, or delete existing Ingresses as described -above, and reset it to a string bereft of `--`. - -## Why do I need a default backend? - -All GCE URL maps require at least one [default backend](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case), which handles all -requests that don't match a host/path. In Ingress, the default backend is -optional, since the resource is cross-platform and not all platforms require -a default backend. If you don't specify one in your yaml, the GCE ingress -controller will inject the default-http-backend Service that runs in the -`kube-system` namespace as the default backend for the GCE HTTP lb allocated -for that Ingress resource. - -Some caveats concerning the default backend: - -* It is the only Backend Service that doesn't directly map to a user specified -NodePort Service -* It's created when the first Ingress is created, and deleted when the last -Ingress is deleted, since we don't want to waste quota if the user is not going -to need L7 loadbalancing through Ingress -* It has a http health check pointing at `/healthz`, not the default `/`, because -`/` serves a 404 by design - - -## How does Ingress work across 2 GCE clusters? - -See federation [documentation](http://kubernetes.io/docs/user-guide/federation/federated-ingress/). - -## I shutdown a cluster without deleting all Ingresses, how do I manually cleanup? - -If you kill a cluster without first deleting Ingresses, the resources will leak. -If you find yourself in such a situation, you can delete the resources by hand: - -1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing" -2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID -3. Delete it, check the boxes to also cascade the deletion down to associated resources (eg: backend-services) -4. Switch to the "Compute Engine" tab, then choose "Instance Groups" -5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID - -We plan to fix this [soon](https://github.com/kubernetes/kubernetes/issues/16337). - -## How do I disable the GCE Ingress controller? - -As of Kubernetes 1.3, GLBC runs as a static pod on the master. -If you want to disable it, you have 3 options: - -### Soft disable - -Option 1. Have it no-op for an Ingress resource based on the `ingress.class` annotation as shown [here](README.md#how-do-i-disable-an-ingress-controller). -This can also be used to use one of the other Ingress controllers at the same time as the GCE controller. - -### Hard disable - -Option 2. SSH into the GCE master node and delete the GLBC manifest file found at `/etc/kubernetes/manifests/glbc.manifest`. - -Option 3. 
Disable the addon in GKE via `gcloud`: - -#### Disabling GCE ingress on cluster creation - -Disable the addon in GKE at cluster bring-up time through the `disable-addons` flag: - -```console -gcloud container clusters create mycluster --network "default" --num-nodes 1 \ ---machine-type n1-standard-2 \ ---zone $ZONE \ ---disk-size 50 \ ---scopes storage-full \ ---disable-addons HttpLoadBalancing -``` - -#### Disabling GCE ingress in an existing cluster - -Disable the addon in GKE for an existing cluster through the `update-addons` flag: - -```console -gcloud container clusters update mycluster --update-addons HttpLoadBalancing=DISABLED -``` - -## What GCE resources are shared between Ingresses? - -Every Ingress creates a pipeline of GCE cloud resources behind an IP. Some of -these are shared between Ingresses out of necessity, while some are shared -because there was no perceived need for duplication (all resources consume -quota and usually cost money). - -Shared: - -* Backend Services: because of low quota and high reuse. A single Service in a -Kubernetes cluster has one NodePort, common throughout the cluster. GCE has -a hard limit of the number of allowed BackendServices, so if multiple Ingresses -all point to a single Service, that creates a single BackendService in GCE -pointing to that Service's NodePort. - -* Instance Group: since an instance can only be part of a single loadbalanced -Instance Group, these must be shared. There is 1 Ingress Instance Group per -zone containing Kubernetes nodes. - -* Health Checks: currently the health checks point at the NodePort -of a BackendService. They don't *need* to be shared, but they are since -BackendServices are shared. - -* Firewall rule: In a non-federated cluster there is a single firewall rule -that covers health check traffic from the range of [GCE loadbalancer IPs](https://cloud.google.com/compute/docs/load-balancing/http/#troubleshooting) -to Service nodePorts. - -Unique: - -Currently, a single Ingress on GCE creates a unique IP and url map. In this -model the following resources cannot be shared: -* Url Map -* Target HTTP(S) Proxies -* SSL Certificates -* Static-ip -* Forwarding rules - - -## How do I debug a controller spinloop? - -The most likely cause of a controller spin loop is some form of GCE validation -failure, eg: -* It's trying to delete a BackendService already in use, say in a UrlMap -* It's trying to add an Instance to more than 1 loadbalanced InstanceGroups -* It's trying to flip the loadbalancing algorithm on a BackendService to RATE, -when some other BackendService is pointing at the same InstanceGroup and asking -for UTILIZATION - -In all such cases, the work queue will put a single key (ingress namespace/name) -that's getting continuously requeued into exponential backoff. However, currently -the Informers that watch the Kubernetes api are setup to periodically resync, -so even though a particular key is in backoff, we might end up syncing all other -keys every, say, 10m, which might trigger the same validation-error-condition -when syncing a shared resource. - -## Creating an Internal Load Balancer without existing ingress -**How the GCE ingress controller Works** -To assemble an L7 Load Balancer, the ingress controller creates an [unmanaged instance-group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances) named `k8s-ig--{UID}` and adds every known minion node to the group. 
For every service specified in all ingresses, a backend service is created to point to that instance group. - -**How the Internal Load Balancer Works** -K8s does not yet assemble ILB's for you, but you can manually create one via the GCP Console. The ILB is composed of a regional forwarding rule and a regional backend service. Similar to the L7 LB, the backend-service points to an unmanaged instance-group containing your K8s nodes. - -**The Complication** -GCP will only allow one load balanced unmanaged instance-group for a given instance. -If you manually created an instance group named something like `my-kubernetes-group` containing all your nodes and put an ILB in front of it, then you will probably encounter a GCP error when setting up an ingress resource. The controller doesn't know to use your `my-kubernetes-group` group and will create it's own. Unfortunately, it won't be able to add any nodes to that group because they already belong to the ILB group. - -As mentioned before, the instance group name is composed of a hard-coded prefix `k8s-ig--` and a cluster-specific UID. The ingress controller will check the K8s configmap for an existing UID value at process start. If it doesn't exist, the controller will create one randomly and update the configmap. - -#### Solutions -**Want an ILB and Ingress?** -If you plan on creating both ingresses and internal load balancers, simply create the ingress resource first then use the GCP Console to create an ILB pointing to the existing instance group. - -**Want just an ILB for now, ingress maybe later?** -Retrieve the UID via configmap, create an instance-group per used zone, then add all respective nodes to the group. -```shell -# Fetch instance group name from config map -GROUPNAME=`kubectl get configmaps ingress-uid -o jsonpath='k8s-ig--{.data.uid}' --namespace=kube-system` - -# Create an instance group for every zone you have nodes. If you use GKE, this is probably a single zone. -gcloud compute instance-groups unmanaged create $GROUPNAME --zone {ZONE} - -# Look at your list of your nodes -kubectl get nodes - -# Add minion nodes that exist in zone X to the instance group in zone X. (Do not add the master!) -gcloud compute instance-groups unmanaged add-instances $GROUPNAME --zone {ZONE} --instances=A,B,C... -``` -You can now follow the GCP Console wizard for creating an internal load balancer and point to the `k8s-ig--{UID}` instance group. - -## Can I use websockets? -Yes! -The GCP HTTP(S) Load Balancer supports websockets. You do not need to change your http server or Kubernetes deployment. You will need to manually configure the created Backend Service's `timeout` setting. This value is the interpreted as the max connection duration. The default value of 30 seconds is probably too small for you. You can increase it to the supported maximum: 86400 (a day) through the GCP Console or the gcloud CLI. - -View the [example](/controllers/gce/examples/websocket/). 
diff --git a/docs/faq/nginx.md b/docs/faq/nginx.md deleted file mode 100644 index 20e06d52ef..0000000000 --- a/docs/faq/nginx.md +++ /dev/null @@ -1,3 +0,0 @@ -# Nginx Ingress controller FAQ - -Placeholder diff --git a/examples/tls-termination/elb-nginx/images/listener.png b/docs/images/elb-l7-listener.png similarity index 100% rename from examples/tls-termination/elb-nginx/images/listener.png rename to docs/images/elb-l7-listener.png diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 603b2d0a51..d04c99a7a6 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -1,15 +1,42 @@ -# Troubleshooting +# Debug & Troubleshooting +## Debug -## Authentication to the Kubernetes API Server +Using the flag `--v=XX` it is possible to increase the level of logging. +In particular: + +- `--v=2` shows details using `diff` about the changes in the configuration in nginx + +```console +I0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf +I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000 ++++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000 +@@ -235,7 +235,6 @@ + + upstream default-http-svcx { + least_conn; +- server 10.2.112.124:5000; + server 10.2.208.50:5000; + + } +I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading... +``` + +- `--v=3` shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format +- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html) + +## Troubleshooting + + +### Authentication to the Kubernetes API Server A number of components are involved in the authentication process and the first step is to narrow @@ -60,8 +87,7 @@ Kubernetes Workstation +---------------------------------------------------+ +------------------+ ``` - -## Service Account +### Service Account If using a service account to connect to the API server, Dashboard expects the file `/var/run/secrets/kubernetes.io/serviceaccount/token` to be present. It provides a secret token that is required to authenticate with the API server. @@ -139,13 +165,12 @@ If it is not working, there are two possible reasons: 1. The contents of the tokens are invalid. Find the secret name with `kubectl get secrets | grep service-account` and delete it with `kubectl delete secret `. It will automatically be recreated. -2. You have a non-standard Kubernetes installation and the file containing the token -may not be present. The API server will mount a volume containing this file, but -only if the API server is configured to use the ServiceAccount admission controller. -If you experience this error, verify that your API server is using the ServiceAccount -admission controller. If you are configuring the API server by hand, you can set -this with the `--admission-control` parameter. Please note that you should use -other admission controllers as well. Before configuring this option, you should +2. You have a non-standard Kubernetes installation and the file containing the token may not be present. +The API server will mount a volume containing this file, but only if the API server is configured to use +the ServiceAccount admission controller. +If you experience this error, verify that your API server is using the ServiceAccount admission controller. +If you are configuring the API server by hand, you can set this with the `--admission-control` parameter. 
+Please note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: @@ -153,148 +178,6 @@ More information: * [User Guide: Service Accounts](http://kubernetes.io/docs/user-guide/service-accounts/) * [Cluster Administrator Guide: Managing Service Accounts](http://kubernetes.io/docs/admin/service-accounts-admin/) -## Kubeconfig -If you want to use a kubeconfig file for authentication, create a deployment file similar to the one below: - -*Note:* the important part is the flag `--kubeconfig=/etc/kubernetes/kubeconfig.yaml`. - - -``` -kind: Service -apiVersion: v1 -metadata: - name: nginx-default-backend - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - ports: - - port: 80 - targetPort: http - selector: - app: nginx-default-backend - ---- - -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: nginx-default-backend - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - replicas: 1 - template: - metadata: - labels: - k8s-addon: ingress-nginx.addons.k8s.io - app: nginx-default-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - image: gcr.io/google_containers/defaultbackend:1.0 - volumeMounts: - - mountPath: /etc/kubernetes - name: kubeconfig - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi - ports: - - name: http - containerPort: 8080 - protocol: TCP - ---- - -kind: ConfigMap -apiVersion: v1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io - ---- - -kind: Service -apiVersion: v1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - type: LoadBalancer - selector: - app: ingress-nginx - ports: - - name: http - port: 80 - targetPort: http - - name: https - port: 443 - targetPort: https - ---- - -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - replicas: 1 - template: - metadata: - labels: - app: ingress-nginx - k8s-addon: ingress-nginx.addons.k8s.io - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: ingress-nginx - imagePullPolicy: Always - ports: - - name: http - containerPort: 80 - protocol: TCP - - name: https - containerPort: 443 - protocol: TCP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend - - --configmap=$(POD_NAMESPACE)/ingress-nginx - - --kubeconfig=/etc/kubernetes/kubeconfig.yaml - volumes: - - name: "kubeconfig" - hostPath: - path: "/etc/kubernetes/" -``` +### Kubeconfig +If you want to use a kubeconfig file for authentication, follow the deploy procedure and +add the flag `--kubeconfig=/etc/kubernetes/kubeconfig.yaml` to the deployment diff --git a/docs/user-guide/annotations.md b/docs/user-guide/annotations.md new file mode 100644 index 0000000000..c23c185dbb --- /dev/null +++ b/docs/user-guide/annotations.md @@ -0,0 +1,327 @@ +# Annotations + +The following annotations are 
supported: + +|Name | type | +|---------------------------|------| +|[ingress.kubernetes.io/add-base-url](#rewrite)|true or false| +|[ingress.kubernetes.io/app-root](#rewrite)|string| +|[ingress.kubernetes.io/affinity](#session-affinity)|cookie| +|[ingress.kubernetes.io/auth-realm](#authentication)|string| +|[ingress.kubernetes.io/auth-secret](#authentication)|string| +|[ingress.kubernetes.io/auth-type](#authentication)|basic or digest| +|[ingress.kubernetes.io/auth-tls-secret](#certificate-authentication)|string| +|[ingress.kubernetes.io/auth-tls-verify-depth](#certificate-authentication)|number| +|[ingress.kubernetes.io/auth-tls-verify-client](#certificate-authentication)|string| +|[ingress.kubernetes.io/auth-tls-error-page](#certificate-authentication)|string| +|[ingress.kubernetes.io/auth-url](#external-authentication)|string| +|[ingress.kubernetes.io/base-url-scheme](#rewrite)|string| +|[ingress.kubernetes.io/client-body-buffer-size](#client-body-buffer-size)|string| +|[ingress.kubernetes.io/configuration-snippet](#configuration-snippet)|string| +|[ingress.kubernetes.io/default-backend](#default-backend)|string| +|[ingress.kubernetes.io/enable-cors](#enable-cors)|true or false| +|[ingress.kubernetes.io/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false| +|[ingress.kubernetes.io/from-to-www-redirect](#redirect-from-to-www)|true or false| +|[ingress.kubernetes.io/limit-connections](#rate-limiting)|number| +|[ingress.kubernetes.io/limit-rps](#rate-limiting)|number| +|[ingress.kubernetes.io/proxy-body-size](#custom-max-body-size)|string| +|[ingress.kubernetes.io/proxy-connect-timeout](#custom-timeouts)|number| +|[ingress.kubernetes.io/proxy-send-timeout](#custom-timeouts)|number| +|[ingress.kubernetes.io/proxy-read-timeout](#custom-timeouts)|number| +|[ingress.kubernetes.io/proxy-request-buffering](#custom-timeouts)|string| +|[ingress.kubernetes.io/rewrite-target](#rewrite)|URI| +|[ingress.kubernetes.io/secure-backends](#secure-backends)|true or false| +|[ingress.kubernetes.io/server-alias](#server-alias)|string| +|[ingress.kubernetes.io/server-snippet](#server-snippet)|string| +|[ingress.kubernetes.io/service-upstream](#service-upstream)|true or false| +|[ingress.kubernetes.io/session-cookie-name](#cookie-affinity)|string| +|[ingress.kubernetes.io/session-cookie-hash](#cookie-affinity)|string| +|[ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false| +|[ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|true or false| +|[ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number| +|[ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number| +|[ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string| +|[ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR| + +### Rewrite + +In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. +Set the annotation `ingress.kubernetes.io/rewrite-target` to the path expected by the service. + +If the application contains relative links it is possible to add an additional annotation `ingress.kubernetes.io/add-base-url` that will prepend a [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) in the header of the returned HTML from the backend. 
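+
+For illustration, an Ingress using these rewrite annotations might look like the following sketch (the host, path, and service names are hypothetical):
+
+```yaml
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: rewrite-example        # illustrative name
+  annotations:
+    # serve the backend at / even though the Ingress path is /something
+    ingress.kubernetes.io/rewrite-target: /
+    # prepend a base tag to the HTML returned by the backend
+    ingress.kubernetes.io/add-base-url: "true"
+spec:
+  rules:
+  - host: foo.bar.com
+    http:
+      paths:
+      - path: /something
+        backend:
+          serviceName: http-svc
+          servicePort: 80
+```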
+
+If the scheme of the [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) needs to be specific, set the annotation `ingress.kubernetes.io/base-url-scheme` to the desired scheme, such as `http` or `https`.
+
+If the Application Root is exposed in a different path and needs to be redirected, set the annotation `ingress.kubernetes.io/app-root` to redirect requests for `/`.
+
+Please check the [rewrite](../examples/rewrite/README.md) example.
+
+### Session Affinity
+
+The annotation `ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.
+The only affinity type available for NGINX is `cookie`.
+
+Please check the [affinity](../examples/affinity/README.md) example.
+
+### Authentication
+
+It is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key `auth`.
+
+The annotations are:
+```
+ingress.kubernetes.io/auth-type: [basic|digest]
+```
+
+Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).
+
+```
+ingress.kubernetes.io/auth-secret: secretName
+```
+
+The name of the secret that contains the usernames and passwords with access to the `path`s defined in the Ingress rule.
+The secret must be created in the same namespace as the Ingress rule.
+
+```
+ingress.kubernetes.io/auth-realm: "realm string"
+```
+
+Please check the [auth](../examples/auth/basic/README.md) example.
+
+### Custom NGINX upstream checks
+
+NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The Ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context using `upstream-max-fails` and `upstream-fail-timeout` in the NGINX ConfigMap or in a particular Ingress rule. `upstream-max-fails` defaults to 0, which means NGINX will respect the container's `readinessProbe` if it is defined. If there is no probe and no value for `upstream-max-fails`, NGINX will continue to send traffic to the container.
+
+**With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.**
+
+To use custom values in an Ingress rule, define these annotations:
+
+`ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the `upstream-fail-timeout` parameter to consider the server unavailable.
+
+`ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.
+
+In NGINX, backend server pools are called "[upstreams](http://nginx.org/en/docs/http/ngx_http_upstream_module.html)". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.
+
+**Important:** All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.
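+
+For example, a sketch with illustrative values (the attempt count and timeout are arbitrary):
+
+```yaml
+ingress.kubernetes.io/upstream-max-fails: "3"
+ingress.kubernetes.io/upstream-fail-timeout: "30"
+```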
+
+Please check the [custom upstream check](../examples/customization/custom-upstream-check/README.md) example.
+
+### Custom NGINX upstream hashing
+
+NGINX supports load balancing by client-server mapping based on [consistent hashing](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](http://www.last.fm/user/RJ/journal/2007/04/10/392555/) consistent hashing method will be used, which ensures that only a few keys are remapped to different servers on upstream group changes.
+
+To enable consistent hashing for a backend:
+
+`ingress.kubernetes.io/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example `ingress.kubernetes.io/upstream-hash-by: "$request_uri"` to consistently hash upstream requests by the current request URI.
+
+### Certificate Authentication
+
+It's possible to enable Certificate-Based Authentication (Mutual Authentication) using additional annotations in the Ingress rule.
+
+The annotations are:
+```
+ingress.kubernetes.io/auth-tls-secret: secretName
+```
+
+The name of the secret that contains the full Certificate Authority chain `ca.crt` that is enabled to authenticate against this Ingress. It's composed of namespace/secretName.
+
+```
+ingress.kubernetes.io/auth-tls-verify-depth
+```
+
+The validation depth between the provided client certificate and the Certification Authority chain.
+
+```
+ingress.kubernetes.io/auth-tls-verify-client
+```
+
+Enables verification of client certificates.
+
+```
+ingress.kubernetes.io/auth-tls-error-page
+```
+
+The URL/page to which the user should be redirected in case of a Certificate Authentication error.
+
+Please check the [tls-auth](../examples/auth/client-certs/README.md) example.
+
+### Configuration snippet
+
+Using this annotation you can add additional configuration to the NGINX location. For example:
+
+```yaml
+ingress.kubernetes.io/configuration-snippet: |
+  more_set_headers "Request-Id: $request_id";
+```
+
+### Default Backend
+
+The ingress controller requires a default backend. This service handles the response when the service in the Ingress rule does not have endpoints.
+This is a global configuration for the ingress controller. In some cases it may be required to return custom content or a different format. In this scenario the annotation `ingress.kubernetes.io/default-backend: <svc name>` can be used to specify a custom default backend.
+
+### Enable CORS
+
+To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation `ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server location enabling this functionality.
+For more information please check https://enable-cors.org/server_nginx.html
+
+### Server Alias
+
+To add Server Aliases to an Ingress rule add the annotation `ingress.kubernetes.io/server-alias: "<alias>"`.
+This will create a server with the same configuration, but with the provided alias as its `server_name`.
+
+*Note:* A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias
+annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created,
+the new server configuration will take precedence over the alias configuration.
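+
+As an illustrative sketch (the hostnames and service are hypothetical), an alias can be added alongside the regular host rule:
+
+```yaml
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: alias-example          # illustrative name
+  annotations:
+    # requests for www.example.com are served with the same configuration as example.com
+    ingress.kubernetes.io/server-alias: "www.example.com"
+spec:
+  rules:
+  - host: example.com
+    http:
+      paths:
+      - path: /
+        backend:
+          serviceName: http-svc
+          servicePort: 80
+```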
+
+For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name
+
+### Server snippet
+
+Using the annotation `ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block.
+
+```yaml
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  annotations:
+    ingress.kubernetes.io/server-snippet: |
+      set $agentflag 0;
+
+      if ($http_user_agent ~* "(Mobile)" ){
+        set $agentflag 1;
+      }
+
+      if ( $agentflag = 1 ) {
+        return 301 https://m.example.com;
+      }
+```
+
+**Important:** This annotation can be used only once per host.
+
+### Client Body Buffer Size
+
+Sets the buffer size for reading the client request body per location. In case the request body is larger than the buffer,
+the whole body or only its part is written to a temporary file. By default, the buffer size is equal to two memory pages.
+This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is
+applied to each location provided in the Ingress rule.
+
+*Note:* The annotation value must be given in a valid format, otherwise it will be ignored.
+For example, to set the client-body-buffer-size the following can be done:
+
+* `ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes
+* `ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte
+* `ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte
+* `ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte
+* `ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte
+
+For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
+
+### External Authentication
+
+To use an existing service that provides authentication, the Ingress rule can be annotated with `ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent.
+Additionally it is possible to set `ingress.kubernetes.io/auth-method` to specify the HTTP method to use (GET or POST) and `ingress.kubernetes.io/auth-send-body` to true or false (default).
+
+```yaml
+ingress.kubernetes.io/auth-url: "URL to the authentication service"
+```
+
+Please check the [external-auth](../examples/auth/external-auth/README.md) example.
+
+### Rate limiting
+
+The annotations `ingress.kubernetes.io/limit-connections`, `ingress.kubernetes.io/limit-rps`, and `ingress.kubernetes.io/limit-rpm` define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).
+
+`ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address.
+
+`ingress.kubernetes.io/limit-rps`: number of connections that may be accepted from a given IP each second.
+
+`ingress.kubernetes.io/limit-rpm`: number of connections that may be accepted from a given IP each minute.
+
+You can specify the client IP source ranges to be excluded from rate-limiting through the `ingress.kubernetes.io/limit-whitelist` annotation. The value is a comma separated list of CIDRs.
+
+If you specify more than one of these annotations in a single Ingress rule, `limit-rpm` takes precedence, followed by `limit-rps`.
+
+The annotations `ingress.kubernetes.io/limit-rate` and `ingress.kubernetes.io/limit-rate-after` limit the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting.
The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.
+
+`ingress.kubernetes.io/limit-rate-after`: sets the initial amount after which the further transmission of a response to a client will be rate limited.
+
+`ingress.kubernetes.io/limit-rate`: maximum rate, in bytes per second, at which the response is transmitted to a client.
+
+To configure this setting globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the NGINX ConfigMap. A value set in an Ingress annotation overrides the global setting.
+
+### SSL Passthrough
+
+The annotation `ingress.kubernetes.io/ssl-passthrough` makes it possible to terminate TLS in the pod instead of in NGINX.
+
+**Important:**
+
+- Using the annotation `ingress.kubernetes.io/ssl-passthrough` invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).
+- The use of this annotation requires the flag `--enable-ssl-passthrough` (disabled by default).
+
+### Secure backends
+
+By default NGINX uses `http` to reach the services. Adding the annotation `ingress.kubernetes.io/secure-backends: "true"` in the Ingress rule changes the protocol to `https`.
+
+### Service Upstream
+
+By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue [#257](https://github.com/kubernetes/ingress/issues/257).
+
+#### Known Issues
+
+If the `service-upstream` annotation is specified, the following things should be taken into consideration:
+
+* Sticky Sessions will not work as only round-robin load balancing is supported.
+* The `proxy_next_upstream` directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
+
+### Server-side HTTPS enforcement through redirect
+
+By default the controller redirects (301) to `HTTPS` if TLS is enabled for that Ingress. If you want to disable that behavior globally, you can use `ssl-redirect: "false"` in the NGINX ConfigMap.
+
+To configure this feature for specific Ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
+
+When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to `HTTPS` even when there is no TLS certificate available. This can be achieved by using the `ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.
+
+### Redirect from to www
+
+In some scenarios it is required to redirect from `www.domain.com` to `domain.com`, or vice versa.
+To enable this feature use the annotation `ingress.kubernetes.io/from-to-www-redirect: "true"`.
+
+**Important:**
+If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`), the annotation will be ignored.
+
+### Whitelist source range
+
+You can specify the allowed client IP source ranges through the `ingress.kubernetes.io/whitelist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.
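+
+For instance, a sketch restricting an Ingress to the ranges mentioned above (the CIDRs are illustrative):
+
+```yaml
+ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
+```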
+
+To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the NGINX ConfigMap.
+
+*Note:* Adding an annotation to an Ingress rule overrides any global restriction.
+
+### Cookie affinity
+
+If you use the `cookie` type you can also specify the name of the cookie that will be used to route the requests with the annotation `ingress.kubernetes.io/session-cookie-name`. The default is to create a cookie named 'route'.
+
+In case of NGINX the annotation `ingress.kubernetes.io/session-cookie-hash` defines which algorithm will be used to 'hash' the used upstream. Default value is `md5` and possible values are `md5`, `sha1` and `index`.
+The `index` option is not hashed; an in-memory index is used instead. It is quicker and the overhead is smaller. Warning: the matching against the upstream servers list is inconsistent, so at reload, if the upstream servers have changed, index values are not guaranteed to correspond to the same server as before! Use it with caution and only if you need to.
+
+In NGINX this feature is implemented by the third party module [nginx-sticky-module-ng](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng). The workflow used to define which upstream server will be used is explained [here](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/raw/08a395c66e425540982c00482f55034e1fee67b6/docs/sticky.pdf).
+
+### Custom timeouts
+
+Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers.
+In some scenarios it is required to have different values. To allow this, the following annotations are provided:
+
+- `ingress.kubernetes.io/proxy-connect-timeout`
+- `ingress.kubernetes.io/proxy-send-timeout`
+- `ingress.kubernetes.io/proxy-read-timeout`
+- `ingress.kubernetes.io/proxy-request-buffering`
+
+### Custom max body size
+
+NGINX returns a 413 error to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
+
+To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the NGINX ConfigMap.
+To use a custom value in an Ingress rule, define this annotation:
+
+```yaml
+ingress.kubernetes.io/proxy-body-size: 8m
+```
diff --git a/docs/user-guide/cli-arguments.md b/docs/user-guide/cli-arguments.md
new file mode 100644
index 0000000000..113fdc9aa6
--- /dev/null
+++ b/docs/user-guide/cli-arguments.md
@@ -0,0 +1,55 @@
+# Command line arguments
+
+```console
+Usage of :
+      --alsologtostderr                  log to standard error as well as files
+      --apiserver-host string            The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted.
+      --configmap string                 Name of the ConfigMap that contains the custom configuration to use
+      --default-backend-service string   Service used to serve a 404 page for the default backend. Takes the form
+                                         namespace/name. The controller uses the first node port of this Service for
+                                         the default backend.
+ --default-server-port int Default port to use for exposing the default server (catch all) (default 8181) + --default-ssl-certificate string Name of the secret + that contains a SSL certificate to be used as default for a HTTPS catch-all server + --disable-node-list Disable querying nodes. If --force-namespace-isolation is true, this should also be set. + --election-id string Election id to use for status update. (default "ingress-controller-leader") + --enable-ssl-passthrough Enable SSL passthrough feature. Default is disabled + --force-namespace-isolation Force namespace isolation. This flag is required to avoid the reference of secrets or + configmaps located in a different namespace than the specified in the flag --watch-namespace. + --health-check-path string Defines + the URL to be used as health check inside in the default server in NGINX. (default "/healthz") + --healthz-port int port for healthz endpoint. (default 10254) + --http-port int Indicates the port to use for HTTP traffic (default 80) + --https-port int Indicates the port to use for HTTPS traffic (default 443) + --ingress-class string Name of the ingress class to route through this controller. + --kubeconfig string Path to kubeconfig file with authorization and master location information. + --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) + --log_dir string If non-empty, write log files in this directory + --logtostderr log to standard error instead of files + --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) + --publish-service string Service fronting the ingress controllers. Takes the form + namespace/name. The controller will set the endpoint records on the + ingress objects to reflect those on the service. + --sort-backends Defines if backends and it's endpoints should be sorted + --ssl-passtrough-proxy-port int Default port to use internally for SSL when SSL Passthgough is enabled (default 442) + --status-port int Indicates the TCP port to use for exposing the nginx status page (default 18080) + --stderrthreshold severity logs at or above this threshold go to stderr (default 2) + --sync-period duration Relist and confirm cloud resources this often. Default is 10 minutes (default 10m0s) + --tcp-services-configmap string Name of the ConfigMap that contains the definition of the TCP services to expose. + The key in the map indicates the external port to be used. The value is the name of the + service with the format namespace/serviceName and the port of the service could be a + number of the name of the port. + The ports 80 and 443 are not allowed as external ports. This ports are reserved for the backend + --udp-services-configmap string Name of the ConfigMap that contains the definition of the UDP services to expose. + The key in the map indicates the external port to be used. The value is the name of the + service with the format namespace/serviceName and the port of the service could be a + number of the name of the port. + --update-status Indicates if the + ingress controller should update the Ingress status IP/hostname. Default is true (default true) + --update-status-on-shutdown Indicates if the + ingress controller should update the Ingress status IP/hostname when the controller + is being stopped. Default is true (default true) + -v, --v Level log level for V logs + --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging + --watch-namespace string Namespace to watch for Ingress. 
Default is to watch all namespaces
+```
diff --git a/docs/user-guide/configmap.md b/docs/user-guide/configmap.md
new file mode 100644
index 0000000000..5a734e1501
--- /dev/null
+++ b/docs/user-guide/configmap.md
@@ -0,0 +1,442 @@
+# NGINX Ingress controller configuration ConfigMap
+
+---
+
+#### proxy-body-size
+
+Sets the maximum allowed size of the client request body.
+See NGINX [client_max_body_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
+
+#### custom-http-errors
+
+Sets which HTTP codes should be passed for processing with the [error_page directive](http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page).
+Setting at least one code also enables [proxy_intercept_errors](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors), which is required to process error_page.
+
+Example usage: `custom-http-errors: 404,415`
+
+#### disable-access-log
+
+Disables the Access Log from the entire Ingress Controller. This is 'false' by default.
+
+#### access-log-path
+
+Access log path. Goes to '/var/log/nginx/access.log' by default. http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log
+
+#### error-log-path
+
+Error log path. Goes to '/var/log/nginx/error.log' by default. http://nginx.org/en/docs/ngx_core_module.html#error_log
+
+#### enable-modsecurity
+
+Enables the ModSecurity module for NGINX.
+By default this is disabled.
+
+#### enable-owasp-modsecurity-crs
+
+Enables the OWASP ModSecurity Core Rule Set (CRS).
+By default this is disabled.
+
+#### disable-ipv6
+
+Disables listening on IPv6.
+By default this is disabled.
+
+#### enable-dynamic-tls-records
+
+Enables dynamically sized TLS records to improve time-to-first-byte.
+By default this is enabled.
+See [CloudFlare's blog](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency) for more information.
+
+#### enable-underscores-in-headers
+
+Enables underscores in header names.
+By default this is disabled.
+
+#### enable-vts-status
+
+Allows the replacement of the default status page with a third party module named [nginx-module-vts](https://github.com/vozlt/nginx-module-vts).
+By default this is disabled.
+
+#### error-log-level
+
+Configures the logging level of errors. Possible values are `debug`, `info`, `notice`, `warn`, `error`, `crit`, `alert` and `emerg`, listed in order of increasing severity.
+
+_References:_
+
+- http://nginx.org/en/docs/ngx_core_module.html#error_log
+
+#### gzip-types
+
+Sets the MIME types in addition to "text/html" to compress. The special value "\*" matches any MIME type.
+Responses with the "text/html" type are always compressed if `use-gzip` is enabled.
+
+#### hsts
+
+Enables or disables the HSTS header in servers running SSL.
+HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS, instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
+
+_References:_
+
+- https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
+- https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
+
+#### hsts-include-subdomains
+
+Enables or disables the use of HSTS in all the subdomains of the server-name.
+
+#### hsts-max-age
+
+Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
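+
+As a sketch, these HSTS-related keys go into the `data` section of the controller ConfigMap as plain strings (the values shown are illustrative, not the defaults):
+
+```yaml
+data:
+  hsts: "true"
+  hsts-include-subdomains: "false"
+  hsts-max-age: "31536000"
+```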
+ +#### hsts-preload + +Enables or disables the preload attribute in the HSTS feature (when it is enabled) + +#### ignore-invalid-headers + +Set if header fields with invalid names should be ignored. +By default this is enabled. + +#### keep-alive: + +Sets the time during which a keep-alive client connection will stay open on the server side. +The zero value disables keep-alive client connections. + +_References:_ + +- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout + +#### load-balance + +Sets the algorithm to use for load balancing. +The value can either be: + +- round_robin: to use the default round robin loadbalancer +- least_conn: to use the least connected method +- ip_hash: to use a hash of the server for routing. + +The default is least_conn. + +_References:_ + +- http://nginx.org/en/docs/http/load_balancing.html. + +#### log-format-upstream + +Sets the nginx [log format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format). +Example for json output: + +```console +log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", + "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$request_id", "remote_user": + "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": + $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", + "request_query": "$args", "request_length": $request_length, "duration": $request_time, + "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": + "$http_user_agent" }' + ``` + +Please check [log-format](log-format.md) for definition of each field. + +#### log-format-stream + +Sets the nginx [stream format](https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format). + +#### max-worker-connections + +Sets the maximum number of simultaneous connections that can be opened by each [worker process](http://nginx.org/en/docs/ngx_core_module.html#worker_connections) + +#### proxy-buffer-size + +Sets the size of the buffer used for [reading the first part of the response](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header. + +#### proxy-connect-timeout + +Sets the timeout for [establishing a connection with a proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds. + +#### proxy-cookie-domain + +Sets a text that [should be changed in the domain attribute](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the “Set-Cookie” header fields of a proxied server response. + +#### proxy-cookie-path + +Sets a text that [should be changed in the path attribute](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the “Set-Cookie” header fields of a proxied server response. + +#### proxy-read-timeout + +Sets the timeout in seconds for [reading a response from the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response. + +#### proxy-send-timeout + +Sets the timeout in seconds for [transmitting a request to the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). 
The timeout is set only between two successive write operations, not for the transmission of the whole request. + +#### proxy-next-upstream + +Specifies in [which cases](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream) a request should be passed to the next server. + +#### proxy-request-buffering + +Enables or disables [buffering of a client request body](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering). + +#### retry-non-idempotent + +Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. +The previous behavior can be restored using the value "true". + +#### server-name-hash-bucket-size + +Sets the size of the bucket for the server names hash tables. + +_References:_ + +- http://nginx.org/en/docs/hash.html +- http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size + +#### server-name-hash-max-size + +Sets the maximum size of the [server names hash tables](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names,map directive’s values, MIME types, names of request header strings, etc. + +_References:_ + +- http://nginx.org/en/docs/hash.html + +#### proxy-headers-hash-bucket-size + +Sets the size of the bucket for the proxy headers hash tables. + +_References:_ + +- http://nginx.org/en/docs/hash.html +- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size + +#### proxy-headers-hash-max-size + +Sets the maximum size of the proxy headers hash tables. + +_References:_ + +- http://nginx.org/en/docs/hash.html +- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size + +#### server-tokens + +Send NGINX Server header in responses and display NGINX version in error pages. +By default this is enabled. + +#### map-hash-bucket-size + +Sets the bucket size for the [map variables hash tables](http://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size). +The details of setting up hash tables are provided in a separate [document](http://nginx.org/en/docs/hash.html). + +#### ssl-buffer-size + +Sets the size of the [SSL buffer](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data. +The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB). +https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/ + +#### ssl-ciphers + +Sets the [ciphers](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. +The ciphers are specified in the format understood by the OpenSSL library. + +The default cipher list is: + `ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256`. + +The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. +The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy). + +Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/). + +#### ssl-dh-param + +Sets the name of the secret that contains Diffie-Hellman key to help with "Perfect Forward Secrecy". 
+ +_References:_ + +- https://www.openssl.org/docs/manmaster/apps/dhparam.html +- https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam +- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam + +#### ssl-protocols + +Sets the [SSL protocols](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use. +The default is: `TLSv1.2`. + +Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`. + +#### ssl-redirect + +Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). + +Default is "true". + +#### ssl-session-cache + +Enables or disables the use of shared [SSL cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes. + +#### ssl-session-cache-size + +Sets the size of the [SSL shared session cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes. + +#### ssl-session-tickets + +Enables or disables session resumption through [TLS session tickets](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets). + +#### ssl-session-ticket-key + +Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. +http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets +By default, a randomly generated key is used. + +To create a ticket: `openssl rand 80 | base64 -w0` + +#### ssl-session-timeout + +Sets the time during which a client may [reuse the session](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache. + +#### upstream-max-fails + +Sets the number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that should happen in the duration set by the `fail_timeout` parameter to consider the server unavailable. + +#### upstream-fail-timeout + +Sets the time during which the specified number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) should happen to consider the server unavailable. + +#### use-gzip + +Enables or disables compression of HTTP responses using the ["gzip" module](http://nginx.org/en/docs/http/ngx_http_gzip_module.html). + +The default mime type list to compress is: `application/atom+xml application/javascript aplication/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`. + +#### use-http2 + +Enables or disables [HTTP/2](http://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections. + +#### use-proxy-protocol + +Enables or disables the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB). + +#### whitelist-source-range + +Sets the default whitelisted IPs for each `server` block. +This can be overwritten by an annotation on an Ingress rule. +See [ngx_http_access_module](http://nginx.org/en/docs/http/ngx_http_access_module.html). 
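+
+Tying several of the options above together, a controller ConfigMap might look like the following sketch (the ConfigMap name and the values are illustrative; the controller reads whichever ConfigMap the `--configmap` flag references):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: nginx-configuration   # illustrative name
+data:
+  proxy-body-size: "8m"
+  use-gzip: "true"
+  ssl-redirect: "true"
+  whitelist-source-range: "10.0.0.0/8"
+```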
+ +#### worker-processes + +Sets the number of [worker processes](http://nginx.org/en/docs/ngx_core_module.html#worker_processes). +The default of "auto" means number of available CPU cores. + +#### worker-shutdown-timeout + +Sets a timeout for Nginx to [wait for worker to gracefully shutdown](http://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout). +The default is "10s". + +#### limit-conn-zone-variable + +Sets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone). The default of "$binary_remote_addr" variable’s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses. + +#### proxy-set-headers + +Sets custom headers from a configmap before sending traffic to backends. See [example](https://github.com/kubernetes/ingress-nginx/tree/master/deploy/examples/customization/custom-headers) + +#### add-headers + +Sets custom headers from a configmap before sending traffic to the client. See `proxy-set-headers` [example](https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers) + +#### bind-address + +Sets the addresses on which the server will accept requests instead of *. +It should be noted that these addresses must exist in the runtime environment or the controller will crash loop. + +#### enable-opentracing + +Enables the nginx Opentracing extension https://github.com/rnburn/nginx-opentracing +By default this is disabled + +#### zipkin-collector-host + +Specifies the host to use when uploading traces. It must be a valid URL + +#### zipkin-collector-port + +Specifies the port to use when uploading traces +Default: 9411 + +#### zipkin-service-name + +Specifies the service name to use for any traces created +Default: nginx + +#### http-snippet + +Adds custom configuration to the http section of the nginx configuration +Default: "" + +#### server-snippet + +Adds custom configuration to all the servers in the nginx configuration +Default: "" + +#### location-snippet + +Adds custom configuration to all the locations in the nginx configuration +Default: "" + + +### Default configuration options + +The following table shows the options, the default value and a description. 
+
+|name | default |
+|:--- |:-------|
+|body-size|1m|
+|custom-http-errors|" "|
+|enable-dynamic-tls-records|"true"|
+|enable-sticky-sessions|"false"|
+|enable-underscores-in-headers|"false"|
+|enable-vts-status|"false"|
+|error-log-level|notice|
+|gzip-types|see use-gzip description above|
+|hsts|"true"|
+|hsts-include-subdomains|"true"|
+|hsts-max-age|"15724800"|
+|hsts-preload|"false"|
+|ignore-invalid-headers|"true"|
+|keep-alive|"75"|
+|log-format-stream|[$time_local] $protocol $status $bytes_sent $bytes_received $session_time|
+|log-format-upstream|[$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status|
+|map-hash-bucket-size|"64"|
+|max-worker-connections|"16384"|
+|proxy-body-size|same as body-size|
+|proxy-buffer-size|"4k"|
+|proxy-request-buffering|"on"|
+|proxy-connect-timeout|"5"|
+|proxy-cookie-domain|"off"|
+|proxy-cookie-path|"off"|
+|proxy-read-timeout|"60"|
+|proxy-real-ip-cidr|0.0.0.0/0|
+|proxy-send-timeout|"60"|
+|retry-non-idempotent|"false"|
+|server-name-hash-bucket-size|"64"|
+|server-name-hash-max-size|"512"|
+|server-tokens|"true"|
+|ssl-buffer-size|4k|
+|ssl-ciphers||
+|ssl-dh-param|value from openssl|
+|ssl-protocols|TLSv1.2|
+|ssl-session-cache|"true"|
+|ssl-session-cache-size|10m|
+|ssl-session-tickets|"true"|
+|ssl-session-timeout|10m|
+|use-gzip|"true"|
+|use-http2|"true"|
+|upstream-keepalive-connections|"0" (disabled)|
+|variables-hash-bucket-size|64|
+|variables-hash-max-size|2048|
+|vts-status-zone-size|10m|
+|vts-default-filter-key|$geoip_country_code country::*|
+|whitelist-source-range|permit all|
+|worker-processes|number of CPUs|
+|limit-conn-zone-variable|$binary_remote_addr|
+|bind-address||
diff --git a/docs/user-guide/custom-errors.md b/docs/user-guide/custom-errors.md
new file mode 100644
index 0000000000..745d4f3bfb
--- /dev/null
+++ b/docs/user-guide/custom-errors.md
@@ -0,0 +1,18 @@
+# Custom errors
+
+In case of an error in a request the body of the response is obtained from the `default backend`.
+Each request to the default backend includes two headers:
+
+- `X-Code` indicates the HTTP code to be returned to the client.
+- `X-Format` the value of the `Accept` header.
+
+**Important:** the custom backend must return the correct HTTP status code to the client. NGINX does not change the response from the custom default backend.
+
+Using these two headers it is possible to use a custom backend service like [this one](https://github.com/kubernetes/ingress-nginx/tree/master/examples/customization/custom-errors/nginx) that inspects each request and returns a custom error page in the format expected by the client. Please check the [custom-errors](examples/customization/custom-errors/README.md) example.
+
+NGINX sends additional headers that can be used to build a custom response:
+
+- X-Original-URI
+- X-Namespace
+- X-Ingress-Name
+- X-Service-Name
diff --git a/docs/user-guide/custom-template.md b/docs/user-guide/custom-template.md
new file mode 100644
index 0000000000..fc8efecae6
--- /dev/null
+++ b/docs/user-guide/custom-template.md
@@ -0,0 +1,54 @@
+# Custom NGINX template
+
+The NGINX template is located in the file `/etc/nginx/template/nginx.tmpl`.
+
+Using a [Volume](https://kubernetes.io/docs/concepts/storage/volumes/) it is possible to use a custom template.
+This includes using a [Configmap](https://kubernetes.io/docs/concepts/storage/volumes/#example-pod-with-a-secret-a-downward-api-and-a-configmap) as source of the template + +```yaml + volumeMounts: + - mountPath: /etc/nginx/template + name: nginx-template-volume + readOnly: true + volumes: + - name: nginx-template-volume + configMap: + name: nginx-template + items: + - key: nginx.tmpl + path: nginx.tmpl +``` + +**Please note the template is tied to the Go code. Do not change names in the variable `$cfg`.** + +For more information about the template syntax please check the [Go template package](https://golang.org/pkg/text/template/). +In addition to the built-in functions provided by the Go package the following functions are also available: + +- empty: returns true if the specified parameter (string) is empty +- contains: [strings.Contains](https://golang.org/pkg/strings/#Contains) +- hasPrefix: [strings.HasPrefix](https://golang.org/pkg/strings/#HasPrefix) +- hasSuffix: [strings.HasSuffix](https://golang.org/pkg/strings/#HasSuffix) +- toUpper: [strings.ToUpper](https://golang.org/pkg/strings/#ToUpper) +- toLower: [strings.ToLower](https://golang.org/pkg/strings/#ToLower) +- buildLocation: helps to build the NGINX Location section in each server +- buildProxyPass: builds the reverse proxy configuration +- buildRateLimit: helps to build a limit zone inside a location if contains a rate limit annotation + +TODO: + +- buildAuthLocation: +- buildAuthResponseHeaders: +- buildResolvers: +- buildLogFormatUpstream: +- buildDenyVariable: +- buildUpstreamName: +- buildForwardedFor: +- buildAuthSignURL: +- buildNextUpstream: +- filterRateLimits: +- formatIP: +- getenv: +- getIngressInformation: +- serverConfig: +- isLocationAllowed: +- isValidClientBodyBufferSize: diff --git a/docs/user-guide/default-ssl-certificate.md b/docs/user-guide/default-ssl-certificate.md new file mode 100644 index 0000000000..5d8c06d064 --- /dev/null +++ b/docs/user-guide/default-ssl-certificate.md @@ -0,0 +1,103 @@ +# Default SSL Certificate + +NGINX provides the option to configure a server as a cath-all with [server name _](http://nginx.org/en/docs/http/server_names.html) for requests that do not match any of the configured server names. This configuration works without issues for HTTP traffic. +In case of HTTPS, NGINX requires a certificate. +For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret behind this flag contains the default certificate to be used in the mentioned scenario. If this flag is not provided NGINX will use a self signed certificate. + +Running without the flag `--default-ssl-certificate`: + +```console +$ curl -v https://10.2.78.7:443 -k +* Rebuilt URL to: https://10.2.78.7:443/ +* Trying 10.2.78.4... 
+* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) +* ALPN, offering http/1.1 +* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH +* successfully set certificate verify locations: +* CAfile: /etc/ssl/certs/ca-certificates.crt + CApath: /etc/ssl/certs +* TLSv1.2 (OUT), TLS header, Certificate Status (22): +* TLSv1.2 (OUT), TLS handshake, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Server hello (2): +* TLSv1.2 (IN), TLS handshake, Certificate (11): +* TLSv1.2 (IN), TLS handshake, Server key exchange (12): +* TLSv1.2 (IN), TLS handshake, Server finished (14): +* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): +* TLSv1.2 (OUT), TLS change cipher, Client hello (1): +* TLSv1.2 (OUT), TLS handshake, Finished (20): +* TLSv1.2 (IN), TLS change cipher, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Finished (20): +* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 +* ALPN, server accepted to use http/1.1 +* Server certificate: +* subject: CN=foo.bar.com +* start date: Apr 13 00:50:56 2016 GMT +* expire date: Apr 13 00:50:56 2017 GMT +* issuer: CN=foo.bar.com +* SSL certificate verify result: self signed certificate (18), continuing anyway. +> GET / HTTP/1.1 +> Host: 10.2.78.7 +> User-Agent: curl/7.47.1 +> Accept: */* +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.11.1 +< Date: Thu, 21 Jul 2016 15:38:46 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload +< +The page you're looking for could not be found. + +* Connection #0 to host 10.2.78.7 left intact +``` + +Specifying `--default-ssl-certificate=default/foo-tls`: + +```console +core@localhost ~ $ curl -v https://10.2.78.7:443 -k +* Rebuilt URL to: https://10.2.78.7:443/ +* Trying 10.2.78.7... +* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) +* ALPN, offering http/1.1 +* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH +* successfully set certificate verify locations: +* CAfile: /etc/ssl/certs/ca-certificates.crt + CApath: /etc/ssl/certs +* TLSv1.2 (OUT), TLS header, Certificate Status (22): +* TLSv1.2 (OUT), TLS handshake, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Server hello (2): +* TLSv1.2 (IN), TLS handshake, Certificate (11): +* TLSv1.2 (IN), TLS handshake, Server key exchange (12): +* TLSv1.2 (IN), TLS handshake, Server finished (14): +* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): +* TLSv1.2 (OUT), TLS change cipher, Client hello (1): +* TLSv1.2 (OUT), TLS handshake, Finished (20): +* TLSv1.2 (IN), TLS change cipher, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Finished (20): +* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 +* ALPN, server accepted to use http/1.1 +* Server certificate: +* subject: CN=foo.bar.com +* start date: Apr 13 00:50:56 2016 GMT +* expire date: Apr 13 00:50:56 2017 GMT +* issuer: CN=foo.bar.com +* SSL certificate verify result: self signed certificate (18), continuing anyway. +> GET / HTTP/1.1 +> Host: 10.2.78.7 +> User-Agent: curl/7.47.1 +> Accept: */* +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.11.1 +< Date: Mon, 18 Jul 2016 21:02:59 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload +< +The page you're looking for could not be found. 
+
+* Connection #0 to host 10.2.78.7 left intact
+```
diff --git a/docs/user-guide/exposing-tcp-udp-services.md b/docs/user-guide/exposing-tcp-udp-services.md
new file mode 100644
index 0000000000..95e0991180
--- /dev/null
+++ b/docs/user-guide/exposing-tcp-udp-services.md
@@ -0,0 +1,29 @@
+# Exposing TCP and UDP services
+
+Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags `--tcp-services-configmap` and `--udp-services-configmap` to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format:
+`<namespace/service name>:<service port>:[PROXY]:[PROXY]`
+
+It is also possible to use a number or the name of the port. The last two fields are optional.
+Adding `PROXY` in either or both of the last two fields enables Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service (https://www.nginx.com/resources/admin-guide/proxy-protocol/).
+
+The next example shows how to expose the service `example-go` running in the namespace `default` on port `8080` using the external port `9000`:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: tcp-configmap-example
+data:
+  9000: "default/example-go:8080"
+```
+
+Since version 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/).
+The next example shows how to expose the service `kube-dns` running in the namespace `kube-system` on port `53` using the external port `53`:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: udp-configmap-example
+data:
+  53: "kube-system/kube-dns:53"
+```
diff --git a/docs/user-guide/external-articles.md b/docs/user-guide/external-articles.md
new file mode 100644
index 0000000000..d9fd828b8a
--- /dev/null
+++ b/docs/user-guide/external-articles.md
@@ -0,0 +1,6 @@
+# External Articles
+
+- [Pain(less) NGINX Ingress](http://danielfm.me/posts/painless-nginx-ingress.html)
+- [Accessing Kubernetes Pods from Outside of the Cluster](http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster)
+- [Kubernetes - Redirect HTTP to HTTPS with ELB and the nginx ingress controller](https://dev.to/tomhoule/kubernetes---redirect-http-to-https-with-elb-and-the-nginx-ingress-controller)
+- [Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure](https://blogs.technet.microsoft.com/livedevopsinjapan/2017/02/28/configure-nginx-ingress-controller-for-tls-termination-on-kubernetes-on-azure-2/)
diff --git a/docs/user-guide/ingress-annotations.md b/docs/user-guide/ingress-annotations.md
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/user-guide/log-format.md b/docs/user-guide/log-format.md
new file mode 100644
index 0000000000..553ca79975
--- /dev/null
+++ b/docs/user-guide/log-format.md
@@ -0,0 +1,34 @@
+# Log format
+
+The default configuration uses a custom logging format to add additional information about upstreams, response time and status.
+
+```
+    log_format upstreaminfo '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - '
+        '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
+        '$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
+```
+
+Sources:
+
+- [upstream variables](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables)
+- [embedded variables](http://nginx.org/en/docs/http/ngx_http_core_module.html#variables)
+
+Description:
+
+- `$proxy_protocol_addr`: the source address when the PROXY protocol is enabled
+- `$remote_addr`: the source address when the PROXY protocol is disabled (default)
+- `$proxy_add_x_forwarded_for`: the `X-Forwarded-For` client request header field with the `$remote_addr` variable appended to it, separated by a comma
+- `$remote_user`: user name supplied with the Basic authentication
+- `$time_local`: local time in the Common Log Format
+- `$request`: full original request line
+- `$status`: response status
+- `$body_bytes_sent`: number of bytes sent to a client, not counting the response header
+- `$http_referer`: value of the Referer header
+- `$http_user_agent`: value of the User-Agent header
+- `$request_length`: request length (including request line, header, and request body)
+- `$request_time`: time elapsed since the first bytes were read from the client
+- `$proxy_upstream_name`: name of the upstream. The format is `upstream-<namespace>-<service name>-<service port>`
+- `$upstream_addr`: keeps the IP address and port, or the path to the UNIX-domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas
+- `$upstream_response_length`: keeps the length of the response obtained from the upstream server
+- `$upstream_response_time`: keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution
+- `$upstream_status`: keeps the status code of the response obtained from the upstream server
\ No newline at end of file
diff --git a/docs/user-guide/modsecurity.md b/docs/user-guide/modsecurity.md
new file mode 100644
index 0000000000..ddfb9fd4ea
--- /dev/null
+++ b/docs/user-guide/modsecurity.md
@@ -0,0 +1,16 @@
+# ModSecurity Web Application Firewall
+
+ModSecurity is an open source, cross-platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org
+
+The [ModSecurity-nginx](https://github.com/SpiderLabs/ModSecurity-nginx) connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
+
+The default ModSecurity configuration file is located in `/etc/nginx/modsecurity/modsecurity.conf`. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.
+To enable the ModSecurity feature we need to specify `enable-modsecurity: "true"` in the configuration configmap.
+
+**NOTE:** the default configuration uses detection only, because that minimises the chances of post-installation disruption.
+The ModSecurity log is written to the file `/var/log/modsec_audit.log`.
+
+The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.
+The directory `/etc/nginx/owasp-modsecurity-crs` contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository.
+Setting `enable-owasp-modsecurity-crs: "true"` enables the use of these rules.
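+
+As an illustration, a minimal sketch of a controller ConfigMap that turns on both options could look like the following. The ConfigMap name and namespace are assumptions here; use whatever ConfigMap your controller already references through the `--configmap` flag.
+
+```yaml
+# Sketch only: metadata values are hypothetical; only the two data keys come from this guide.
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: nginx-configuration   # assumed name, must match the --configmap flag
+  namespace: kube-system      # assumed namespace
+data:
+  enable-modsecurity: "true"
+  enable-owasp-modsecurity-crs: "true"
+```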
diff --git a/docs/user-guide/nginx-status-page.md b/docs/user-guide/nginx-status-page.md
new file mode 100644
index 0000000000..8152c5eae7
--- /dev/null
+++ b/docs/user-guide/nginx-status-page.md
@@ -0,0 +1,11 @@
+# NGINX status page
+
+The [ngx_http_stub_status_module](http://nginx.org/en/docs/http/ngx_http_stub_status_module.html) module provides access to basic status information.
+This module is active by default on the URL `/nginx_status` on the status port (default is 18080).
+
+This controller provides an alternative to this module using the [nginx-module-vts](https://github.com/vozlt/nginx-module-vts) module.
+To use this module just set `enable-vts-status: "true"` in the configuration configmap.
+
+![nginx-module-vts screenshot](https://cloud.githubusercontent.com/assets/3648408/10876811/77a67b70-8183-11e5-9924-6a6d0c5dc73a.png "screenshot with filter")
+
+To extract the information in JSON format the module provides the custom URL `/nginx_status/format/json`.
diff --git a/docs/user-guide/opentracing.md b/docs/user-guide/opentracing.md
new file mode 100644
index 0000000000..bff2221f21
--- /dev/null
+++ b/docs/user-guide/opentracing.md
@@ -0,0 +1,39 @@
+# Opentracing
+
+Using the third-party module [rnburn/nginx-opentracing](https://github.com/rnburn/nginx-opentracing) the NGINX ingress controller can configure NGINX to enable [OpenTracing](http://opentracing.io) instrumentation.
+By default this feature is disabled.
+
+To enable the instrumentation we just need to enable it in the configuration configmap and set the host to which the traces should be sent.
+
+The [aledbf/zipkin-js-example](https://github.com/aledbf/zipkin-js-example) GitHub repository provides a dockerized version of zipkin-js-example with the required Kubernetes descriptors.
+To install the example and the Zipkin collector we just need to run:
+
+```
+kubectl create -f https://raw.githubusercontent.com/aledbf/zipkin-js-example/kubernetes/kubernetes/zipkin.yaml
+kubectl create -f https://raw.githubusercontent.com/aledbf/zipkin-js-example/kubernetes/kubernetes/deployment.yaml
+```
+
+We also need to configure the NGINX controller configmap with the required values:
+
+```yaml
+apiVersion: v1
+data:
+  enable-opentracing: "true"
+  zipkin-collector-host: zipkin.default.svc.cluster.local
+kind: ConfigMap
+metadata:
+  labels:
+    k8s-app: nginx-ingress-controller
+  name: nginx-custom-configuration
+```
+
+Using curl we can generate some traces:
+
+```console
+$ curl -v http://$(minikube ip)/api -H 'Host: zipkin-js-example'
+$ curl -v http://$(minikube ip)/api -H 'Host: zipkin-js-example'
+```
+
+In the Zipkin interface we can see the details:
+
+![zipkin screenshot](../images/zipkin-demo.png "zipkin collector screenshot")
diff --git a/docs/user-guide/tls.md b/docs/user-guide/tls.md
new file mode 100644
index 0000000000..4d9b42a244
--- /dev/null
+++ b/docs/user-guide/tls.md
@@ -0,0 +1,154 @@
+# TLS
+
+- [Default SSL Certificate](#default-ssl-certificate)
+- [SSL Passthrough](#ssl-passthrough)
+- [HTTPS enforcement](#server-side-https-enforcement)
+- [HSTS](#http-strict-transport-security)
+- [Server-side HTTPS enforcement through redirect](#server-side-https-enforcement-through-redirect)
+- [Kube-Lego](#automated-certificate-management-with-kube-lego)
+
+## Default SSL Certificate
+
+NGINX provides the option to configure a server as a catch-all with [server name _](http://nginx.org/en/docs/http/server_names.html) for requests that do not match any of the configured server names.
This configuration works without issues for HTTP traffic. +In case of HTTPS, NGINX requires a certificate. +For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret behind this flag contains the default certificate to be used in the mentioned scenario. If this flag is not provided NGINX will use a self signed certificate. + +Running without the flag `--default-ssl-certificate`: + +```console +$ curl -v https://10.2.78.7:443 -k +* Rebuilt URL to: https://10.2.78.7:443/ +* Trying 10.2.78.4... +* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) +* ALPN, offering http/1.1 +* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH +* successfully set certificate verify locations: +* CAfile: /etc/ssl/certs/ca-certificates.crt + CApath: /etc/ssl/certs +* TLSv1.2 (OUT), TLS header, Certificate Status (22): +* TLSv1.2 (OUT), TLS handshake, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Server hello (2): +* TLSv1.2 (IN), TLS handshake, Certificate (11): +* TLSv1.2 (IN), TLS handshake, Server key exchange (12): +* TLSv1.2 (IN), TLS handshake, Server finished (14): +* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): +* TLSv1.2 (OUT), TLS change cipher, Client hello (1): +* TLSv1.2 (OUT), TLS handshake, Finished (20): +* TLSv1.2 (IN), TLS change cipher, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Finished (20): +* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 +* ALPN, server accepted to use http/1.1 +* Server certificate: +* subject: CN=foo.bar.com +* start date: Apr 13 00:50:56 2016 GMT +* expire date: Apr 13 00:50:56 2017 GMT +* issuer: CN=foo.bar.com +* SSL certificate verify result: self signed certificate (18), continuing anyway. +> GET / HTTP/1.1 +> Host: 10.2.78.7 +> User-Agent: curl/7.47.1 +> Accept: */* +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.11.1 +< Date: Thu, 21 Jul 2016 15:38:46 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload +< +The page you're looking for could not be found. + +* Connection #0 to host 10.2.78.7 left intact +``` + +Specifying `--default-ssl-certificate=default/foo-tls`: + +```console +core@localhost ~ $ curl -v https://10.2.78.7:443 -k +* Rebuilt URL to: https://10.2.78.7:443/ +* Trying 10.2.78.7... 
+* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
+* ALPN, offering http/1.1
+* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
+* successfully set certificate verify locations:
+*   CAfile: /etc/ssl/certs/ca-certificates.crt
+  CApath: /etc/ssl/certs
+* TLSv1.2 (OUT), TLS header, Certificate Status (22):
+* TLSv1.2 (OUT), TLS handshake, Client hello (1):
+* TLSv1.2 (IN), TLS handshake, Server hello (2):
+* TLSv1.2 (IN), TLS handshake, Certificate (11):
+* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
+* TLSv1.2 (IN), TLS handshake, Server finished (14):
+* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
+* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
+* TLSv1.2 (OUT), TLS handshake, Finished (20):
+* TLSv1.2 (IN), TLS change cipher, Client hello (1):
+* TLSv1.2 (IN), TLS handshake, Finished (20):
+* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
+* ALPN, server accepted to use http/1.1
+* Server certificate:
+* subject: CN=foo.bar.com
+* start date: Apr 13 00:50:56 2016 GMT
+* expire date: Apr 13 00:50:56 2017 GMT
+* issuer: CN=foo.bar.com
+* SSL certificate verify result: self signed certificate (18), continuing anyway.
+> GET / HTTP/1.1
+> Host: 10.2.78.7
+> User-Agent: curl/7.47.1
+> Accept: */*
+>
+< HTTP/1.1 404 Not Found
+< Server: nginx/1.11.1
+< Date: Mon, 18 Jul 2016 21:02:59 GMT
+< Content-Type: text/html
+< Transfer-Encoding: chunked
+< Connection: keep-alive
+< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
+<
+The page you're looking for could not be found.
+
+* Connection #0 to host 10.2.78.7 left intact
+```
+
+## SSL Passthrough
+
+The flag `--enable-ssl-passthrough` enables the SSL passthrough feature.
+By default this feature is disabled.
+
+## Server-side HTTPS enforcement
+
+By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use `ssl-redirect: "false"` in the configuration ConfigMap.
+
+To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
+
+## HTTP Strict Transport Security
+
+HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
+
+The controller enables HSTS by default (note the `Strict-Transport-Security` header in the responses above).
+
+To disable this behavior use `hsts: "false"` in the configuration ConfigMap.
+
+### Server-side HTTPS enforcement through redirect
+
+By default the controller redirects (301) to `HTTPS` if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use `ssl-redirect: "false"` in the NGINX config map.
+
+To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
+
+When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to `HTTPS` even when there is no TLS certificate available. This can be achieved by using the `ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.
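+
+As an illustration, a minimal sketch of an Ingress resource using these annotations might look like the following. The resource name, host and backend service are hypothetical; only the annotation keys come from this guide.
+
+```yaml
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: example-ingress                                 # hypothetical name
+  annotations:
+    # Opt this resource out of the default TLS redirect:
+    ingress.kubernetes.io/ssl-redirect: "false"
+    # Or force the redirect even without a local TLS section,
+    # e.g. when TLS terminates at an external load balancer:
+    # ingress.kubernetes.io/force-ssl-redirect: "true"
+spec:
+  rules:
+  - host: foo.bar.com                                   # hypothetical host
+    http:
+      paths:
+      - path: /
+        backend:
+          serviceName: http-svc                         # hypothetical backend service
+          servicePort: 80
+```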
+ +## Automated Certificate Management with Kube-Lego + +[Kube-Lego] automatically requests missing or expired certificates from [Let's Encrypt] by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation: + +```console +kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true" +``` + +To setup Kube-Lego you can take a look at this [full example]. The first +version to fully support Kube-Lego is nginx Ingress controller 0.8. + +[full example]:https://github.com/jetstack/kube-lego/tree/master/examples +[Kube-Lego]:https://github.com/jetstack/kube-lego +[Let's Encrypt]:https://letsencrypt.org diff --git a/examples/aws/README.md b/examples/aws/README.md deleted file mode 100644 index 3245093ea5..0000000000 --- a/examples/aws/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# NGINX Ingress running in AWS - -This example shows how is possible to use the nginx ingress controller in AWS behind an ELB configured with Proxy Protocol. - -```console -kubectl create -f ./nginx-ingress-controller.yaml -``` - -This command creates: -- a default backend deployment and service. -- a service with `type: LoadBalancer` configuring Proxy Protocol in the ELB (`service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'`). -- a configmap for the ingress controller enabling proxy protocol in NGINX (`use-proxy-protocol: "true"`) -- a deployment for the ingress controller - -Is the proxy protocol necessary? - -No but only enabling the protocol is possible to keep the real source IP address requesting the connection. - -### References - -- http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-proxy-protocol.html -- https://www.nginx.com/resources/admin-guide/proxy-protocol/ diff --git a/examples/aws/nginx-ingress-controller.yaml b/examples/aws/nginx-ingress-controller.yaml deleted file mode 100644 index 9d2b80d298..0000000000 --- a/examples/aws/nginx-ingress-controller.yaml +++ /dev/null @@ -1,134 +0,0 @@ -kind: Service -apiVersion: v1 -metadata: - name: nginx-default-backend - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - ports: - - port: 80 - targetPort: http - selector: - app: nginx-default-backend - ---- - -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: nginx-default-backend - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - replicas: 1 - template: - metadata: - labels: - k8s-addon: ingress-nginx.addons.k8s.io - app: nginx-default-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - image: gcr.io/google_containers/defaultbackend:1.0 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi - ports: - - name: http - containerPort: 8080 - protocol: TCP - ---- - -kind: ConfigMap -apiVersion: v1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io -data: - use-proxy-protocol: "true" - ---- - -kind: Service -apiVersion: v1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io - annotations: - service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*' -spec: - type: LoadBalancer - selector: - app: ingress-nginx - ports: - - name: http - port: 80 - targetPort: http - - name: https - port: 443 - targetPort: https - ---- - -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: ingress-nginx - labels: - k8s-addon: 
ingress-nginx.addons.k8s.io -spec: - replicas: 1 - template: - metadata: - labels: - app: ingress-nginx - k8s-addon: ingress-nginx.addons.k8s.io - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: ingress-nginx - imagePullPolicy: Always - ports: - - name: http - containerPort: 80 - protocol: TCP - - name: https - containerPort: 443 - protocol: TCP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend - - --configmap=$(POD_NAMESPACE)/ingress-nginx - - --publish-service=$(POD_NAMESPACE)/ingress-nginx diff --git a/examples/customization/configuration-snippets/README.md b/examples/customization/configuration-snippets/README.md deleted file mode 100644 index 798e3e17ec..0000000000 --- a/examples/customization/configuration-snippets/README.md +++ /dev/null @@ -1,48 +0,0 @@ -# Deploying the Nginx Ingress controller - -This example aims to demonstrate the deployment of an nginx ingress controller and -with the use of an annotation in the Ingress rule be able to customize the nginx -configuration. - -## Default Backend - -The default backend is a Service capable of handling all url paths and hosts the -nginx controller doesn't understand. This most basic implementation just returns -a 404 page: - -```console -$ kubectl apply -f default-backend.yaml -deployment "default-http-backend" created -service "default-http-backend" created - -$ kubectl -n kube-system get po -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-qgwdd 1/1 Running 0 28s -``` - -## Controller - -You can deploy the controller as follows: - -```console -$ kubectl apply -f nginx-ingress-controller.yaml -deployment "nginx-ingress-controller" created - -$ kubectl -n kube-system get po -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-qgwdd 1/1 Running 0 2m -nginx-ingress-controller-873061567-4n3k2 1/1 Running 0 42s -``` - -## Ingress -The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at [this example](/examples/customization/custom-headers/nginx). 
- -```console -$ kubectl apply -f ingress.yaml -deployment "nginx-ingress-controller" created -``` - -## Test - -Check if the contents of the annotation are present in the nginx.conf file using: -`kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf` diff --git a/examples/customization/custom-template/README.md b/examples/customization/custom-template/README.md deleted file mode 100644 index 259ca80a07..0000000000 --- a/examples/customization/custom-template/README.md +++ /dev/null @@ -1,8 +0,0 @@ -This example shows how it is possible to use a custom template - -First create a configmap with a template inside running: -``` -kubectl create configmap nginx-template --from-file=nginx.tmpl=../../nginx.tmpl -``` - -Next create the rc `kubectl create -f custom-template.yaml` diff --git a/examples/customization/custom-template/custom-template.yaml b/examples/customization/custom-template/custom-template.yaml deleted file mode 100644 index 6f07d1a5c2..0000000000 --- a/examples/customization/custom-template/custom-template.yaml +++ /dev/null @@ -1,62 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: nginx-ingress-controller - labels: - k8s-app: nginx-ingress-lb -spec: - replicas: 1 - selector: - k8s-app: nginx-ingress-lb - template: - metadata: - labels: - k8s-app: nginx-ingress-lb - name: nginx-ingress-lb - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-ingress-lb - imagePullPolicy: Always - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - # use downward API - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - volumeMounts: - - mountPath: /etc/nginx/template - name: nginx-template-volume - readOnly: true - volumes: - - name: nginx-template-volume - configMap: - name: nginx-template - items: - - key: nginx.tmpl - path: nginx.tmpl diff --git a/examples/customization/ssl-dh-param/default-backend.yaml b/examples/customization/ssl-dh-param/default-backend.yaml deleted file mode 100644 index 3c40989a31..0000000000 --- a/examples/customization/ssl-dh-param/default-backend.yaml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: default-http-backend - labels: - k8s-app: default-http-backend - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: default-http-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - # Any image is permissable as long as: - # 1. It serves a 404 page at / - # 2. 
It serves 200 on a /healthz endpoint - image: gcr.io/google_containers/defaultbackend:1.0 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - ports: - - containerPort: 8080 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi ---- -apiVersion: v1 -kind: Service -metadata: - name: default-http-backend - namespace: kube-system - labels: - k8s-app: default-http-backend -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - k8s-app: default-http-backend diff --git a/examples/customization/ssl-dh-param/nginx-ingress-controller.yaml b/examples/customization/ssl-dh-param/nginx-ingress-controller.yaml deleted file mode 100644 index 10a01dc935..0000000000 --- a/examples/customization/ssl-dh-param/nginx-ingress-controller.yaml +++ /dev/null @@ -1,53 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: nginx-ingress-controller - labels: - k8s-app: nginx-ingress-controller - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: nginx-ingress-controller - spec: - # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration - # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host - # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used - # like with kubeadm - # hostNetwork: true - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-ingress-controller - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf diff --git a/examples/daemonset/README.md b/examples/daemonset/README.md deleted file mode 100644 index b1f944d800..0000000000 --- a/examples/daemonset/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# Nginx Ingress DaemonSet - -In some cases, the Ingress controller will be required to be run at all the nodes in cluster. Using [DaemonSet](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/apps/daemon.md) can achieve this requirement. - -## Default Backend - -The default backend is a service of handling all url paths and hosts the nginx controller doesn't understand. 
Deploy the default-http-backend as follow: - -```console -$ kubectl apply -f ../../deployment/nginx/default-backend.yaml -deployment "default-http-backend" configured -service "default-http-backend" configured - -$ kubectl -n kube-system get svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -default-http-backend 192.168.3.6 80/TCP 1h - -$ kubectl -n kube-system get po -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-6b47n 1/1 Running 0 1h -``` - -## Ingress DaemonSet - -Deploy the daemonset as follows: - -```console -$ kubectl apply -f nginx-ingress-daemonset.yaml -daemonset "nginx-ingress-lb" created - -$ kubectl -n kube-system get ds -NAME DESIRED CURRENT READY NODE-SELECTOR AGE -nginx-ingress-lb 2 2 2 21s - -$ kubectl -n kube-system get po -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-6b47n 1/1 Running 0 2h -nginx-ingress-lb-8381i 1/1 Running 0 56s -nginx-ingress-lb-h54gf 1/1 Running 0 56s -``` diff --git a/examples/daemonset/nginx-ingress-daemonset.yaml b/examples/daemonset/nginx-ingress-daemonset.yaml deleted file mode 100644 index 7fdae729c9..0000000000 --- a/examples/daemonset/nginx-ingress-daemonset.yaml +++ /dev/null @@ -1,50 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: DaemonSet -metadata: - name: nginx-ingress-lb - labels: - name: nginx-ingress-lb - namespace: kube-system -spec: - template: - metadata: - labels: - name: nginx-ingress-lb - annotations: - prometheus.io/port: '10254' - prometheus.io/scrape: 'true' - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-ingress-lb - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - diff --git a/examples/default-backend.yaml b/examples/default-backend.yaml deleted file mode 100644 index 3c40989a31..0000000000 --- a/examples/default-backend.yaml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: default-http-backend - labels: - k8s-app: default-http-backend - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: default-http-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - # Any image is permissable as long as: - # 1. It serves a 404 page at / - # 2. 
It serves 200 on a /healthz endpoint - image: gcr.io/google_containers/defaultbackend:1.0 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - ports: - - containerPort: 8080 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi ---- -apiVersion: v1 -kind: Service -metadata: - name: default-http-backend - namespace: kube-system - labels: - k8s-app: default-http-backend -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - k8s-app: default-http-backend diff --git a/examples/deployment/README.md b/examples/deployment/README.md deleted file mode 100644 index 2eff41f2b0..0000000000 --- a/examples/deployment/README.md +++ /dev/null @@ -1,57 +0,0 @@ -# Deploying the Nginx Ingress controller - -This example aims to demonstrate the deployment of an nginx ingress controller. - -## Default Backend - -The default backend is a Service capable of handling all url paths and hosts the -nginx controller doesn't understand. This most basic implementation just returns -a 404 page: - -```console -$ kubectl apply -f default-backend.yaml -deployment "default-http-backend" created -service "default-http-backend" created - -$ kubectl -n kube-system get pods -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-qgwdd 1/1 Running 0 28s -``` - -## Controller - -You can deploy the controller as follows: - -1. Disable the ingress addon: -```console -$ minikube addons disable ingress -``` -2. Use the [docker daemon](https://github.com/kubernetes/minikube/blob/master/docs/reusing_the_docker_daemon.md) -3. [Build the image](../../../docs/dev/getting-started.md) -4. Change [nginx-ingress-controller.yaml](nginx-ingress-controller.yaml) to use the appropriate image. Local images can be -seen by performing `docker images`. -```yaml -image: : -``` -5. Create the nginx-ingress-controller deployment: -```console -$ kubectl apply -f nginx-ingress-controller.yaml -deployment "nginx-ingress-controller" created - -$ kubectl -n kube-system get pods -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-qgwdd 1/1 Running 0 2m -nginx-ingress-controller-873061567-4n3k2 1/1 Running 0 42s -``` - -Note the default settings of this controller: -* serves a `/healthz` url on port 10254, as a status probe -* takes a `--default-backend-service` argument pointing to the Service created above - -## Running on a cloud provider - -If you're running this ingress controller on a cloud-provider, you should assume -the provider also has a native Ingress controller and set the annotation -`kubernetes.io/ingress.class: nginx` in all Ingresses meant for this controller. -You might also need to open a firewall-rule for ports 80/443 of the nodes the -controller is running on. diff --git a/examples/deployment/default-backend.yaml b/examples/deployment/default-backend.yaml deleted file mode 100644 index 3c40989a31..0000000000 --- a/examples/deployment/default-backend.yaml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: default-http-backend - labels: - k8s-app: default-http-backend - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: default-http-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - # Any image is permissable as long as: - # 1. It serves a 404 page at / - # 2. 
It serves 200 on a /healthz endpoint - image: gcr.io/google_containers/defaultbackend:1.0 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - ports: - - containerPort: 8080 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi ---- -apiVersion: v1 -kind: Service -metadata: - name: default-http-backend - namespace: kube-system - labels: - k8s-app: default-http-backend -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - k8s-app: default-http-backend diff --git a/examples/deployment/kubeadm/README.md b/examples/deployment/kubeadm/README.md deleted file mode 100644 index 3cb0f0b85c..0000000000 --- a/examples/deployment/kubeadm/README.md +++ /dev/null @@ -1,38 +0,0 @@ -# Deploying the Nginx Ingress controller on kubeadm clusters - -This example aims to demonstrate the deployment of an nginx ingress controller with kubeadm, -and is nearly the same as the example above, but here the Ingress Controller is using -`hostNetwork: true` until the CNI kubelet networking plugin is compatible with `hostPort` -(see issue: [kubernetes/kubernetes#31307](https://github.com/kubernetes/kubernetes/issues/31307)) - -## Default Backend - -The default backend is a Service capable of handling all url paths and hosts the -nginx controller doesn't understand. This most basic implementation just returns -a 404 page. - -## Controller - -The Nginx Ingress Controller uses nginx (surprisingly!) to loadbalance requests that are coming to -ports 80 and 443 to Services in the cluster. - -```console -$ kubectl apply -f https://rawgit.com/kubernetes/ingress/master/examples/deployment/nginx/kubeadm/nginx-ingress-controller.yaml -deployment "default-http-backend" created -service "default-http-backend" created -deployment "nginx-ingress-controller" created -``` - -Note the default settings of this controller: -* serves a `/healthz` url on port 10254, as both a liveness and readiness probe -* automatically deploys the `gcr.io/google_containers/defaultbackend:1.0` image for serving 404 requests. - -At its current state, it only supports running on `amd64` nodes. - -## Running on a cloud provider - -If you're running this ingress controller on a cloudprovider, you should assume -the provider also has a native Ingress controller and set the annotation -`kubernetes.io/ingress.class: nginx` in all Ingresses meant for this controller. -You might also need to open a firewall-rule for ports 80/443 of the nodes the -controller is running on. diff --git a/examples/deployment/kubeadm/nginx-ingress-controller.yaml b/examples/deployment/kubeadm/nginx-ingress-controller.yaml deleted file mode 100644 index 04e8d1f45d..0000000000 --- a/examples/deployment/kubeadm/nginx-ingress-controller.yaml +++ /dev/null @@ -1,104 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: default-http-backend - labels: - k8s-app: default-http-backend - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: default-http-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - # Any image is permissable as long as: - # 1. It serves a 404 page at / - # 2. 
It serves 200 on a /healthz endpoint - image: gcr.io/google_containers/defaultbackend:1.0 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - ports: - - containerPort: 8080 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi ---- -apiVersion: v1 -kind: Service -metadata: - name: default-http-backend - namespace: kube-system - labels: - k8s-app: default-http-backend -spec: - ports: - - port: 80 - targetPort: 8080 - selector: - k8s-app: default-http-backend ---- -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: nginx-ingress-controller - labels: - k8s-app: nginx-ingress-controller - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: nginx-ingress-controller - spec: - # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration - # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host - # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used - # like with kubeadm - hostNetwork: true - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-ingress-controller - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend diff --git a/examples/deployment/nginx-ingress-controller.yaml b/examples/deployment/nginx-ingress-controller.yaml deleted file mode 100644 index f21d4c95bc..0000000000 --- a/examples/deployment/nginx-ingress-controller.yaml +++ /dev/null @@ -1,55 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: nginx-ingress-controller - labels: - k8s-app: nginx-ingress-controller - namespace: kube-system -spec: - replicas: 1 - template: - metadata: - labels: - k8s-app: nginx-ingress-controller - annotations: - prometheus.io/port: '10254' - prometheus.io/scrape: 'true' - spec: - # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration - # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host - # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used - # like with kubeadm - # hostNetwork: true - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-ingress-controller - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - env: - - name: POD_NAME - valueFrom: - fieldRef: - 
fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend diff --git a/examples/http-svc.yaml b/examples/http-svc.yaml deleted file mode 100644 index ff25ab0048..0000000000 --- a/examples/http-svc.yaml +++ /dev/null @@ -1,51 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: http-svc - labels: - app: http-svc -spec: - type: NodePort - ports: - - port: 80 - # This port needs to be available on all nodes in the cluster - nodePort: 30301 - targetPort: 8080 - protocol: TCP - name: http - selector: - app: http-svc ---- -apiVersion: v1 -kind: ReplicationController -metadata: - name: http-svc -spec: - replicas: 1 - template: - metadata: - labels: - app: http-svc - spec: - containers: - - name: http-svc - image: gcr.io/google_containers/echoserver:1.8 - ports: - - containerPort: 8080 - env: - - name: NODE_NAME - valueFrom: - fieldRef: - fieldPath: spec.nodeName - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - - name: POD_IP - valueFrom: - fieldRef: - fieldPath: status.podIP diff --git a/examples/ingress.yaml b/examples/ingress.yaml deleted file mode 100644 index 8a78b85b96..0000000000 --- a/examples/ingress.yaml +++ /dev/null @@ -1,26 +0,0 @@ -# This is the Ingress resource that creates a HTTP Loadbalancer configured -# according to the Ingress rules. -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: echomap -spec: - rules: - - host: foo.bar.com - http: - paths: - - path: /foo - backend: - serviceName: echoheaders-x - servicePort: 80 - - host: bar.baz.com - http: - paths: - - path: /bar - backend: - serviceName: echoheaders-y - servicePort: 80 - - path: /foo - backend: - serviceName: echoheaders-x - servicePort: 80 \ No newline at end of file diff --git a/examples/rbac/nginx-ingress-controller.yml b/examples/rbac/nginx-ingress-controller.yml deleted file mode 100644 index 19232d3e73..0000000000 --- a/examples/rbac/nginx-ingress-controller.yml +++ /dev/null @@ -1,37 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: nginx-ingress-controller - namespace: nginx-ingress -spec: - replicas: 2 - selector: - matchLabels: - k8s-app: nginx-ingress-lb - template: - metadata: - labels: - k8s-app: nginx-ingress-lb - spec: - serviceAccountName: nginx-ingress-serviceaccount - containers: - - name: nginx-ingress-controller - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - args: - - /nginx-ingress-controller - - --default-backend-service=default/default-http-backend - - --default-ssl-certificate=$(POD_NAMESPACE)/tls-certificate - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - ports: - - name: http - containerPort: 80 - - name: https - containerPort: 443 diff --git a/examples/scaling-deployment/README.md b/examples/scaling-deployment/README.md deleted file mode 100644 index e381740159..0000000000 --- a/examples/scaling-deployment/README.md +++ /dev/null @@ -1,41 +0,0 @@ -# Deploying multi Nginx Ingress Controllers - -This example aims to demonstrate the Deployment of multi nginx ingress controllers. - -## Default Backend - -The default backend is a service of handling all url paths and hosts the nginx controller doesn't understand. 
Deploy the default-http-backend as follow: - -```console -$ kubectl apply -f ../../deployment/nginx/default-backend.yaml -deployment "default-http-backend" configured -service "default-http-backend" configured - -$ kubectl -n kube-system get svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -default-http-backend 192.168.3.52 80/TCP 6m - -$ kubectl -n kube-system get po -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-wz6o3 1/1 Running 0 6m -``` - -## Ingress Deployment - -Deploy the Deployment of multi controllers as follows: - -```console -$ kubectl apply -f nginx-ingress-deployment.yaml -deployment "nginx-ingress-controller" created - -$ kubectl -n kube-system get deployment -NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE -default-http-backend 1 1 1 1 16m -nginx-ingress-controller 2 2 2 2 24s - -$ kubectl -n kube-system get po -NAME READY STATUS RESTARTS AGE -default-http-backend-2657704409-wz6o3 1/1 Running 0 16m -nginx-ingress-controller-3752011415-0qbi6 1/1 Running 0 39s -nginx-ingress-controller-3752011415-vi8fq 1/1 Running 0 39s -``` diff --git a/examples/scaling-deployment/nginx-ingress-deployment.yaml b/examples/scaling-deployment/nginx-ingress-deployment.yaml deleted file mode 100644 index 8c0fe14ae3..0000000000 --- a/examples/scaling-deployment/nginx-ingress-deployment.yaml +++ /dev/null @@ -1,47 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Deployment -metadata: - name: nginx-ingress-controller - labels: - k8s-app: nginx-ingress-controller - namespace: kube-system -spec: - replicas: 2 - template: - metadata: - labels: - k8s-app: nginx-ingress-controller - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-ingress-controller - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend diff --git a/examples/tcp/README.md b/examples/tcp/README.md deleted file mode 100644 index abe853386c..0000000000 --- a/examples/tcp/README.md +++ /dev/null @@ -1,128 +0,0 @@ -# TCP loadbalancing - -This example shows how to implement TCP loadbalancing through the Nginx Controller - -## Prerequisites - -You need a [Default Backend service](/examples/deployment/nginx/README.md#default-backend) and a [test HTTP service](/examples/PREREQUISITES.md#test-http-service) for this example - -## Config TCP Service - -To configure which services and ports will be exposed: -``` -$ kubectl create -f nginx-tcp-ingress-configmap.yaml -configmap "nginx-tcp-ingress-configmap" created - -$ kubectl -n kube-system get configmap -NAME DATA AGE -nginx-tcp-ingress-configmap 1 10m - -$ kubectl -n kube-system describe configmap nginx-tcp-ingress-configmap -Name: nginx-tcp-ingress-configmap -Namespace: kube-system -Labels: -Annotations: - -Data -==== -9000: ----- -default/http-svc:80 -``` - -The file `nginx-tcp-ingress-configmap.yaml` uses a ConfigMap where the key is the external port to use and the value is -`:` - -It is possible to use a number or the name of the port - -## Deploy -``` -$ kubectl create -f nginx-tcp-ingress-controller.yaml 
-replicationcontroller "nginx-ingress-controller" created - -$ kubectl -n kube-system get rc -NAME DESIRED CURRENT READY AGE -nginx-ingress-controller 1 1 1 3m - -$ kubectl -n kube-system describe rc nginx-ingress-controller -Name: nginx-ingress-controller -Namespace: kube-system -Image(s): gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 -Selector: k8s-app=nginx-tcp-ingress-lb -Labels: k8s-app=nginx-ingress-lb -Annotations: -Replicas: 1 current / 1 desired -Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed -No volumes. -Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 1m 1m 1 replication-controller Normal SuccessfulCreate Created pod: nginx-ingress-controller-mv92m - -$ kubectl -n kube-system get po -o wide -NAME READY STATUS RESTARTS AGE IP -default-http-backend-2198840601-fxxjg 1/1 Running 0 2h 172.16.22.4 10.114.51.137 -nginx-ingress-controller-mv92m 1/1 Running 0 2m 172.16.63.6 10.114.51.207 -``` - -## Test -``` -$ (sleep 1; echo "GET / HTTP/1.1"; echo "Host: 172.16.63.6:9000"; echo;echo;sleep 2) | telnet 172.16.63.6 9000 -Trying 172.16.63.6... -Connected to 172.16.63.6. -Escape character is '^]'. -HTTP/1.1 200 OK -Server: nginx/1.9.11 -Date: Thu, 20 Apr 2017 07:53:30 GMT -Content-Type: text/plain -Transfer-Encoding: chunked -Connection: keep-alive - -f -CLIENT VALUES: - -1b -client_address=172.16.63.6 - -c -command=GET - -c -real path=/ - -a -query=nil - -14 -request_version=1.1 - -25 -request_uri=http://172.16.63.6:8080/ - -1 - - -f -SERVER VALUES: - -2a -server_version=nginx: 1.9.11 - lua: 10001 - -1 - - -12 -HEADERS RECEIVED: - -16 -host=172.16.63.6:9000 - -6 -BODY: - -14 --no body in request- -0 - -Connection closed by foreign host. 
-``` diff --git a/examples/tcp/nginx-tcp-ingress-configmap.yaml b/examples/tcp/nginx-tcp-ingress-configmap.yaml deleted file mode 100644 index 84aeb2bdfa..0000000000 --- a/examples/tcp/nginx-tcp-ingress-configmap.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: v1 -kind: ConfigMap -metadata: - name: nginx-tcp-ingress-configmap - namespace: kube-system -data: - 9000: "default/http-svc:80" diff --git a/examples/tcp/nginx-tcp-ingress-controller.yaml b/examples/tcp/nginx-tcp-ingress-controller.yaml deleted file mode 100644 index 134781c18b..0000000000 --- a/examples/tcp/nginx-tcp-ingress-controller.yaml +++ /dev/null @@ -1,53 +0,0 @@ -apiVersion: v1 -kind: ReplicationController -metadata: - name: nginx-ingress-controller - labels: - k8s-app: nginx-ingress-lb - namespace: kube-system -spec: - replicas: 1 - selector: - k8s-app: nginx-tcp-ingress-lb - template: - metadata: - labels: - k8s-app: nginx-tcp-ingress-lb - name: nginx-tcp-ingress-lb - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: nginx-tcp-ingress-lb - readinessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 10 - timeoutSeconds: 1 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - ports: - - containerPort: 80 - hostPort: 80 - - containerPort: 443 - hostPort: 443 - - containerPort: 9000 - hostPort: 9000 - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/default-http-backend - - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-tcp-ingress-configmap diff --git a/examples/tls-termination/elb-nginx/README.md b/examples/tls-termination/elb-nginx/README.md deleted file mode 100644 index 9fc110b190..0000000000 --- a/examples/tls-termination/elb-nginx/README.md +++ /dev/null @@ -1,15 +0,0 @@ - -### Elastic Load Balancer for TLS termination - -This example shows the required steps to use classic Elastic Load Balancer for TLS termination. 
- -Change line of the file `elb-tls-nginx-ingress-controller.yaml` replacing the dummy id with a valid one `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"` - -Then execute: -``` -$ kubectl create -f elb-tls-nginx-ingress-controller.yaml -``` - -This example creates an ELB with just two listeners, one in port 80 and another in port 443 - -![Listeners](images/listener.png) diff --git a/examples/tls-termination/elb-nginx/nginx-ingress-controller.yaml b/examples/tls-termination/elb-nginx/nginx-ingress-controller.yaml deleted file mode 100644 index c2d9a0ff5b..0000000000 --- a/examples/tls-termination/elb-nginx/nginx-ingress-controller.yaml +++ /dev/null @@ -1,135 +0,0 @@ -kind: Service -apiVersion: v1 -metadata: - name: nginx-default-backend - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - ports: - - port: 80 - targetPort: http - selector: - app: nginx-default-backend - ---- - -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: nginx-default-backend - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - replicas: 1 - template: - metadata: - labels: - k8s-addon: ingress-nginx.addons.k8s.io - app: nginx-default-backend - spec: - terminationGracePeriodSeconds: 60 - containers: - - name: default-http-backend - image: gcr.io/google_containers/defaultbackend:1.0 - livenessProbe: - httpGet: - path: /healthz - port: 8080 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - resources: - limits: - cpu: 10m - memory: 20Mi - requests: - cpu: 10m - memory: 20Mi - ports: - - name: http - containerPort: 8080 - protocol: TCP - ---- - -kind: ConfigMap -apiVersion: v1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io - ---- - -kind: Service -apiVersion: v1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io - annotations: - # replace with the correct value of the generated certifcate in the AWS console - service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX" - # the backend instances are HTTP - service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" - # Map port 443 - service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https" - -spec: - type: LoadBalancer - selector: - app: ingress-nginx - ports: - - name: http - port: 80 - targetPort: http - - name: https - port: 443 - targetPort: http - ---- - -kind: Deployment -apiVersion: extensions/v1beta1 -metadata: - name: ingress-nginx - labels: - k8s-addon: ingress-nginx.addons.k8s.io -spec: - replicas: 1 - template: - metadata: - labels: - app: ingress-nginx - k8s-addon: ingress-nginx.addons.k8s.io - spec: - terminationGracePeriodSeconds: 60 - containers: - - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 - name: ingress-nginx - imagePullPolicy: Always - ports: - - name: http - containerPort: 80 - protocol: TCP - livenessProbe: - httpGet: - path: /healthz - port: 10254 - scheme: HTTP - initialDelaySeconds: 30 - timeoutSeconds: 5 - env: - - name: POD_NAME - valueFrom: - fieldRef: - fieldPath: metadata.name - - name: POD_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - args: - - /nginx-ingress-controller - - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend - - --configmap=$(POD_NAMESPACE)/ingress-nginx - - --publish-service=$(POD_NAMESPACE)/ingress-nginx diff --git a/examples/udp/README.md b/examples/udp/README.md deleted file mode 100644 index ed043f0b96..0000000000 --- 
a/examples/udp/README.md +++ /dev/null @@ -1,80 +0,0 @@ -# UDP loadbalancing - -This example shows how to implement UDP loadbalancing through the Nginx Controller - -## Prerequisites - -You need a [Default Backend service](/examples/deployment/nginx/README.md#default-backend) and a [kube-dns service](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns#kube-dns) for this example -``` -$ kubectl -n kube-system get svc -NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -kube-system default-http-backend 192.168.3.204 80/TCP 1d -kube-system kube-dns 192.168.3.10 53/UDP,53/TCP 23h -``` - -## Config UDP Service - -To configure which services and ports will be exposed: -``` -$ kubectl create -f nginx-udp-ingress-configmap.yaml -configmap "nginx-udp-ingress-configmap" created - -$ kubectl -n kube-system get configmap -NAME DATA AGE -extension-apiserver-authentication 1 1d -kube-dns 0 1d -nginx-udp-ingress-configmap 1 15m - -$ kubectl -n kube-system describe configmap nginx-udp-ingress-configmap -Name: nginx-udp-ingress-configmap -Namespace: kube-system -Labels: -Annotations: - -Data -==== -9001: ----- -kube-system/kube-dns:53 -``` - -The file `nginx-udp-ingress-configmap.yaml` uses a ConfigMap where the key is the external port to use and the value is -`:` - -## Deploy -``` -$ kubectl create -f nginx-udp-ingress-controller.yaml -replicationcontroller "nginx-udp-ingress-controller" created - -$ kubectl -n kube-system get rc -NAME DESIRED CURRENT READY AGE -nginx-udp-ingress-controller 1 1 1 13m - -$ kubectl -n kube-system describe rc nginx-udp-ingress-controller -Name: nginx-udp-ingress-controller -Namespace: kube-system -Image(s): gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15 -Selector: k8s-app=nginx-udp-ingress-lb -Labels: k8s-app=nginx-udp-ingress-lb -Annotations: -Replicas: 1 current / 1 desired -Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed -No volumes. -Events: - FirstSeen LastSeen Count From SubObjectPath Type Reason Message - --------- -------- ----- ---- ------------- -------- ------ ------- - 46s 46s 1 replication-controller Normal SuccessfulCreate Created pod: nginx-udp-ingress-controller-m0pjl - -$ kubectl -n kube-system get po -o wide -NAME READY STATUS RESTARTS AGE IP -NAME READY STATUS RESTARTS AGE IP NODE -default-http-backend-2198840601-5j1zc 1/1 Running 0 1d 172.16.45.3 10.114.51.28 -kube-dns-1874783228-nvs9f 3/3 Running 0 23h 172.16.10.3 10.114.51.217 -nginx-udp-ingress-controller-m0pjl 1/1 Running 0 1m 172.16.10.2 10.114.51.217 -``` - -## Test -``` -$ nc -uzv 172.16.10.2 9001 -Connection to 172.16.10.2 9001 port [udp/*] succeeded! 
-```
diff --git a/examples/udp/nginx-udp-ingress-configmap.yaml b/examples/udp/nginx-udp-ingress-configmap.yaml
deleted file mode 100644
index 640c64149b..0000000000
--- a/examples/udp/nginx-udp-ingress-configmap.yaml
+++ /dev/null
@@ -1,7 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: nginx-udp-ingress-configmap
-  namespace: kube-system
-data:
-  9001: "kube-system/kube-dns:53"
diff --git a/examples/udp/nginx-udp-ingress-controller.yaml b/examples/udp/nginx-udp-ingress-controller.yaml
deleted file mode 100644
index 16562d2748..0000000000
--- a/examples/udp/nginx-udp-ingress-controller.yaml
+++ /dev/null
@@ -1,54 +0,0 @@
-apiVersion: v1
-kind: ReplicationController
-metadata:
-  name: nginx-udp-ingress-controller
-  labels:
-    k8s-app: nginx-udp-ingress-lb
-  namespace: kube-system
-spec:
-  replicas: 1
-  selector:
-    k8s-app: nginx-udp-ingress-lb
-  template:
-    metadata:
-      labels:
-        k8s-app: nginx-udp-ingress-lb
-        name: nginx-udp-ingress-lb
-    spec:
-      terminationGracePeriodSeconds: 60
-      containers:
-      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
-        name: nginx-udp-ingress-lb
-        readinessProbe:
-          httpGet:
-            path: /healthz
-            port: 10254
-            scheme: HTTP
-        livenessProbe:
-          httpGet:
-            path: /healthz
-            port: 10254
-            scheme: HTTP
-          initialDelaySeconds: 10
-          timeoutSeconds: 1
-        env:
-        - name: POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-        - name: POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        ports:
-        - containerPort: 80
-          hostPort: 80
-        - containerPort: 443
-          hostPort: 443
-        - containerPort: 9001
-          hostPort: 9001
-          protocol: UDP
-        args:
-        - /nginx-ingress-controller
-        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
-        - --udp-services-configmap=$(POD_NAMESPACE)/nginx-udp-ingress-configmap
diff --git a/pkg/ingress/controller/launch.go b/pkg/ingress/controller/launch.go
index f7c89811ab..b81bc7c37c 100644
--- a/pkg/ingress/controller/launch.go
+++ b/pkg/ingress/controller/launch.go
@@ -319,5 +319,5 @@ func handleFatalInitError(err error) {
 		"This most likely means that the cluster is misconfigured (e.g., it has "+
 		"invalid apiserver certificates or service accounts configuration). Reason: %s\n"+
 		"Refer to the troubleshooting guide for more information: "+
-		"https://github.com/kubernetes/ingress/blob/master/docs/troubleshooting.md", err)
+		"https://github.com/kubernetes/ingress-nginx/blob/master/docs/troubleshooting.md", err)
 }
diff --git a/pkg/nginx/controller/nginx.go b/pkg/nginx/controller/nginx.go
index dc89f86b2a..988b9f63fd 100644
--- a/pkg/nginx/controller/nginx.go
+++ b/pkg/nginx/controller/nginx.go
@@ -504,7 +504,7 @@ func (n *NGINXController) UpdateIngressStatus(*extensions.Ingress) []apiv1.LoadB
 	return nil
 }
 
-// OnUpdate is called by syncQueue in https://github.com/kubernetes/ingress/blob/master/core/pkg/ingress/controller/controller.go#L426
+// OnUpdate is called by syncQueue in https://github.com/kubernetes/ingress-nginx/blob/master/pkg/ingress/controller/controller.go#L426
 // periodically to keep the configuration in sync.
 //
 // convert configmap to custom configuration object (different in each implementation)
@@ -719,7 +719,7 @@ func (n *NGINXController) OnUpdate(ingressCfg ingress.Configuration) error {
 
 // nginxHashBucketSize computes the correct nginx hash_bucket_size for a hash with the given longest key
 func nginxHashBucketSize(longestString int) int {
-	// See https://github.com/kubernetes/ingress/issues/623 for an explanation
+	// See https://github.com/kubernetes/ingress-nginx/issues/623 for an explanation
 	wordSize := 8 // Assume 64 bit CPU
 	n := longestString + 2
 	aligned := (n + wordSize - 1) & ^(wordSize - 1)
diff --git a/rootfs/etc/nginx/template/nginx.tmpl b/rootfs/etc/nginx/template/nginx.tmpl
index 6107933b94..cdcce1a1a1 100644
--- a/rootfs/etc/nginx/template/nginx.tmpl
+++ b/rootfs/etc/nginx/template/nginx.tmpl
@@ -378,7 +378,7 @@ http {
     server {
         # Use the port {{ $all.ListenPorts.Status }} (random value just to avoid known ports) as default port for nginx.
         # Changing this value requires a change in:
-        # https://github.com/kubernetes/ingress/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
+        # https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
        listen {{ $all.ListenPorts.Status }} default_server reuseport backlog={{ $all.BacklogSize }};
        {{ if $IsIPV6Enabled }}listen [::]:{{ $all.ListenPorts.Status }} default_server reuseport backlog={{ $all.BacklogSize }};{{ end }}
        set $proxy_upstream_name "-";
diff --git a/rootfs/ingress-controller/clean-nginx-conf.sh b/rootfs/ingress-controller/clean-nginx-conf.sh
index 0e109b7dff..412788c022 100755
--- a/rootfs/ingress-controller/clean-nginx-conf.sh
+++ b/rootfs/ingress-controller/clean-nginx-conf.sh
@@ -3,6 +3,8 @@
 # This script removes consecutive empty lines in nginx.conf
 # Using sed is more simple than using a go regex
 
-# first sed removes empty lines
-# second sed command replaces the empty lines
-sed -e 's/\r//g' | sed -e 's/^ *$/\'$'\n/g' | sed -e '/^$/{N;/^\n$/d;}'
+# Sed commands:
+# 1. remove carriage return characters
+# 2. replace lines containing only spaces with empty lines
+# 3. collapse consecutive empty lines into a single one
+sed -e 's/\r//g' | sed -e 's/^ *$/\'$'\n/g' | sed -e '/^$/{N;/^\n$/D;}'
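
For reference, a small illustration of what the updated `clean-nginx-conf.sh` pipeline above does: the first `sed` strips carriage returns, the second turns whitespace-only lines into empty lines, and the third collapses any run of empty lines into a single one. The invocation path `/ingress-controller/clean-nginx-conf.sh` (mirroring the `rootfs` layout above) and the input fragment are assumptions for this sketch, not part of the change itself:

```console
$ printf 'server {\r\n\n   \n\n  listen 80;\n}\n' | /ingress-controller/clean-nginx-conf.sh
server {

  listen 80;
}
```

The switch from lowercase `d` to uppercase `D` in the last expression matters here: `d` discards the whole pattern space, so a pair of adjacent empty lines disappears entirely, while `D` restarts the cycle on the remaining pattern space and therefore leaves exactly one empty line.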