
Whitelist-source-range not working properly with PROXY protocol #4305

Closed
pierluigilenoci opened this issue Jul 10, 2019 · 30 comments

@pierluigilenoci

pierluigilenoci commented Jul 10, 2019

BUG REPORT

NGINX Ingress controller version: 0.25.0
Kubernetes version (use kubectl version): 1.13.6

  • Cloud provider or hardware configuration: AWS + ELB
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)
  • Kernel (e.g. uname -a): 4.9.0-7-amd64
  • Install tools: helm
  • Others:
    The configuration is simple: AWS, ELB in front of Nginx
    Kubernetes installed with KOPS
    Installed with Helm with this configuration (see the sketch just below this list):
    "use-proxy-protocol": "true"
    "whitelist-source-range": "<list of office IPs>"

What happened:
I updated nginx-ingress on a test cluster from v0.24.1 to 0.25.0 with Helm.
With version 0.24.1 everything works fine; with 0.25.0 I get a 403 when I try to access the dashboard.

What you expected to happen:
Nothing to change, only nginx being updated

How to reproduce it (as minimally and precisely as possible):
Update the nginx-ingress

@aledbf
Member

aledbf commented Jul 10, 2019

@PierluigiLenociAkelius I cannot reproduce this issue. Please check the ingress controller pod logs to see if you get the real IP address.

@pierluigilenoci pierluigilenoci changed the title Whitelist-source-reage not working properly Whitelist-source-range not working properly Jul 19, 2019
@pierluigilenoci
Author

pierluigilenoci commented Jul 24, 2019

Hi @aledbf,
sorry for the delay in answering. This is my current values configuration for the nginx-ingress Helm chart:
https://pastebin.com/DtCQGtp6
I also tried with the default values with the same result.

These are the pod logs:
[nginx-ingress-controller-cd9d5b49d-kbpt8] 2019/07/10 12:49:48 [error] 191#191: *916 access forbidden by rule, client: 10.55.5.100, server: dashboard.devops.k8s.aws.akelius.com, request: "GET / HTTP/2.0", host: "dashboard.devops.k8s.aws.akelius.com"
[nginx-ingress-controller-cd9d5b49d-kbpt8] { "time_iso8601": "2019-07-10T12:49:48+00:00", "time_local": "10/Jul/2019:12:49:48 +0000", "core": { "body_bytes_sent": "150", "status": "403", "server_name": "dashboard.devops.k8s.aws.akelius.com", "remote_addr": "10.55.5.100", "remote_user": "", "the_real_ip": "93.188.244.214", "vhost": "dashboard.devops.k8s.aws.akelius.com", "path": "/", "request": "GET / HTTP/2.0", "request_query": "", "request_id": "3922fc62d1e5b16a5cd485368882e6a6", "X-Request-ID": "3922fc62d1e5b16a5cd485368882e6a6", "request_length": "16", "request_time": "0.000", "request_proto": "HTTP/2.0", "request_method": "GET", "http": { "http_referer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:67.0) Gecko/20100101 Firefox/67.0", "http_x_forwarded_for": "" }, "proxy": { "proxy_upstream_name": "kube-system-kubernetes-dashboard-80", "proxy_add_x_forwarded_for": "10.55.5.100" }, "upstream": { "upstream_addr": "", "upstream_response_length": "", "upstream_response_time": "", "upstream_status": "" }, "k8s": { "namespace": "kube-system", "ingress_name": "kubernetes-dashboard", "service_name": "kubernetes-dashboard", "service_port": "80" } } }

Note: 10.55.5.100 is one of the internal IPs of the cluster. I know that 'remote_addr' and 'the_real_ip' should be equal, but they aren't. That's the entire point.

The same configuration works on Azure but not on AWS.
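
For context on how the real client address is supposed to be derived here: with use-proxy-protocol enabled, the controller relies on nginx's stock real_ip module. A minimal hand-written sketch of the relevant directives (the trusted CIDR is a placeholder; the config the controller actually generates may differ):

# trust PROXY protocol information only from these peers (placeholder: ELB / VPC range)
set_real_ip_from 10.55.0.0/16;
# replace the connection address with the client address carried in the PROXY header
real_ip_header proxy_protocol;

If that substitution does not happen before the whitelist check, the check sees the load balancer or node address instead of the client.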

@pierluigilenoci
Author

@bartlettc22 do you have the same issue?

@pierluigilenoci
Author

pierluigilenoci commented Jul 26, 2019

@aledbf knock knock

@embik
Member

embik commented Jul 30, 2019

I can confirm this issue. It appears that 0.25.0 matches the wrong IP against the whitelist when using the PROXY protocol: it matches the load balancer IP against the CIDRs defined by the ingress annotation. Afterwards, it logs the correct client IP though.

This is a redacted part of our logs with 0.25.0:

2019/07/30 14:16:36 [error] 115#115: *2569 access forbidden by rule, client: <LOAD BALANCER IP>, server: <HOST>, request: "GET / HTTP/2.0", host: "<HOST>"
<REAL CLIENT IP> - [<REAL CLIENT IP>] - - [30/Jul/2019:14:16:36 +0000] "GET / HTTP/2.0" 403 150 "-" "curl/7.52.1" 47 0.000 [<TARGET SERVICE>] [] - - - - b09fd9ea322b26b20463ff9952dc65f6

If my Ingress is annotated with nginx.ingress.kubernetes.io/whitelist-source-range=<REAL CLIENT IP>/32, this is the result. If I update the annotation to nginx.ingress.kubernetes.io/whitelist-source-range=<LOAD BALANCER IP>/32, it allows access (but for every client, since it's the load balancer's IP).

Our setup is not unusual; it's a TCP HAProxy with PROXY protocol enabled in front of nginx-ingress. This issue might break white- and blacklists for everyone running a similar setup.
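
For reference, that annotation sits on the Ingress object itself; a minimal sketch with placeholder names and IP (API version matches what clusters of that era used):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example
  annotations:
    # placeholder client address; only this /32 would be allowed through
    nginx.ingress.kubernetes.io/whitelist-source-range: "198.51.100.7/32"
spec:
  rules:
    - host: example.invalid
      http:
        paths:
          - path: /
            backend:
              serviceName: example
              servicePort: 80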

Edit: Our Helm values for the stable/nginx-ingress Chart; Chart version 1.11.4:

controller:
  config:
    use-proxy-protocol: "True"
    proxy-real-ip-cidr: "<LOAD BALANCER IP>/32"
    server-tokens: "false"
  kind: DaemonSet
  service:
    type: NodePort
    nodePorts:
      http: 31080
      https: 31443

@pierluigilenoci pierluigilenoci changed the title Whitelist-source-range not working properly Whitelist-source-range not working properly with PROXY protocol Jul 30, 2019
@aledbf
Member

aledbf commented Aug 2, 2019

@PierluigiLenociAkelius I am sorry, but I cannot reproduce this issue:

  1. create a cluster in AWS and install the ingress controller https://gist.github.com/aledbf/ed958f64b48e50b038863e6bf8a9186b
  2. deploy the echoheaders demo https://gist.github.com/aledbf/24bb508f5b5c9ee668491a84ec9c5641
  3. verify it's working
curl ab5b8dc46b57411e9871a06d3f50a37e-350911647.us-west-2.elb.amazonaws.com -H 'Host: foo.bar'


Hostname: http-svc-79b6bb8bb8-vc5lb

Pod Information:
	node name:	ip-172-20-44-198.us-west-2.compute.internal
	pod name:	http-svc-79b6bb8bb8-vc5lb
	pod namespace:	default
	pod IP:	100.96.1.7

Server values:
	server_version=nginx: 1.12.2 - lua: 10010

Request Information:
	client_address=100.96.1.6
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://foo.bar:8080/

Request Headers:
	accept=*/*
	host=foo.bar
	user-agent=curl/7.64.0
	x-forwarded-for=200.30.255.93
	x-forwarded-host=foo.bar
	x-forwarded-port=80
	x-forwarded-proto=http
	x-original-uri=/
	x-real-ip=200.30.255.93
	x-request-id=d5110dbd6706d7303e7f0a139f26fdbe
	x-scheme=http

Request Body:
	-no body in request-
  4. configure a whitelist whitelist-source-range: 127.0.0.1/32
k get configmap -n ingress-nginx nginx-configuration -o yaml
apiVersion: v1
data:
  use-proxy-protocol: "true"
  whitelist-source-range: 127.0.0.1/32
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
  5. check 403 is returned
curl ab5b8dc46b57411e9871a06d3f50a37e-350911647.us-west-2.elb.amazonaws.com -H 'Host: foo.bar'
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
  6. check the IP address in the logs
k logs -f -n ingress-nginx   nginx-ingress-controller-fd96b4f85-phkkr 
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.25.0
  Build:      git-1387f7b7e
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

W0802 22:27:41.778131       8 flags.go:221] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: openresty/1.15.8.1
W0802 22:27:41.781302       8 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0802 22:27:41.781453       8 main.go:183] Creating API client for https://100.64.0.1:443
I0802 22:27:41.793079       8 main.go:227] Running in Kubernetes cluster version v1.13 (v1.13.5) - git (clean) commit 2166946f41b36dea2c4626f90a77706f426cdea2 - platform linux/amd64
I0802 22:27:42.027366       8 main.go:102] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
E0802 22:27:42.028427       8 main.go:131] v1.13.5
W0802 22:27:42.028457       8 main.go:106] Using deprecated "k8s.io/api/extensions/v1beta1" package because Kubernetes version is < v1.14.0
I0802 22:27:42.046597       8 nginx.go:275] Starting NGINX Ingress controller
I0802 22:27:42.070462       8 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"b136d328-b574-11e9-871a-06d3f50a37ea", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0802 22:27:42.070535       8 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"b080cb81-b574-11e9-871a-06d3f50a37ea", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0802 22:27:42.070678       8 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"b01e00a1-b574-11e9-871a-06d3f50a37ea", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0802 22:27:43.247098       8 nginx.go:319] Starting NGINX process
I0802 22:27:43.247231       8 leaderelection.go:235] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
I0802 22:27:43.247568       8 controller.go:133] Configuration changes detected, backend reload required.
I0802 22:27:43.252898       8 leaderelection.go:245] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0802 22:27:43.252975       8 status.go:86] new leader elected: nginx-ingress-controller-fd96b4f85-phkkr
I0802 22:27:43.305632       8 controller.go:149] Backend successfully reloaded.
I0802 22:27:43.305704       8 controller.go:158] Initial sync, sleeping for 1 second.
[02/Aug/2019:22:27:44 +0000]TCP200000.000
I0802 22:29:17.477627       8 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"http-svc", UID:"f6e90d6d-b574-11e9-871a-06d3f50a37ea", APIVersion:"extensions/v1beta1", ResourceVersion:"881", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/http-svc
W0802 22:29:20.438313       8 controller.go:878] Service "default/http-svc" does not have any active Endpoint.
I0802 22:29:20.438394       8 controller.go:133] Configuration changes detected, backend reload required.
I0802 22:29:20.496851       8 controller.go:149] Backend successfully reloaded.
[02/Aug/2019:22:29:20 +0000]TCP200000.000
W0802 22:29:23.771654       8 controller.go:878] Service "default/http-svc" does not have any active Endpoint.
[02/Aug/2019:22:29:37 +0000]TCP200000.000
I0802 22:29:43.260462       8 status.go:309] updating Ingress default/http-svc status from [] to [{ ab5b8dc46b57411e9871a06d3f50a37e-350911647.us-west-2.elb.amazonaws.com}]
I0802 22:29:43.263754       8 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"http-svc", UID:"f6e90d6d-b574-11e9-871a-06d3f50a37ea", APIVersion:"extensions/v1beta1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/http-svc
200.30.255.93 - [200.30.255.93] - - [02/Aug/2019:22:30:13 +0000] "GET / HTTP/1.1" 200 735 "-" "curl/7.64.0" 71 0.001 [default-http-svc-8080] [] 100.96.1.7:8080 735 0.000 200 5b44b7242058c2ffa36feb3e93800e38
I0802 22:32:02.040078       8 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"b01e00a1-b574-11e9-871a-06d3f50a37ea", APIVersion:"v1", ResourceVersion:"1124", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap ingress-nginx/nginx-configuration
I0802 22:32:02.042545       8 controller.go:133] Configuration changes detected, backend reload required.
I0802 22:32:02.098852       8 controller.go:149] Backend successfully reloaded.
[02/Aug/2019:22:32:02 +0000]TCP200000.000
2019/08/02 22:32:12 [error] 327#327: *3423 access forbidden by rule, client: 200.30.255.93, server: foo.bar, request: "GET / HTTP/1.1", host: "foo.bar"
200.30.255.93 - [200.30.255.93] - - [02/Aug/2019:22:32:12 +0000] "GET / HTTP/1.1" 403 159 "-" "curl/7.64.0" 71 0.000 [default-http-svc-8080] [] - - - - 41305c228c991312e6cd7c9ce9d16173
2019/08/02 22:32:23 [error] 330#330: *3565 access forbidden by rule, client: 200.30.255.93, server: foo.bar, request: "GET / HTTP/1.1", host: "foo.bar"
200.30.255.93 - [200.30.255.93] - - [02/Aug/2019:22:32:23 +0000] "GET / HTTP/1.1" 403 159 "-" "curl/7.64.0" 71 0.000 [default-http-svc-8080] [] - - - - 0005bbbdff25c922fee833082edac0b8
I0802 22:34:58.781679       8 event.go:258] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"b01e00a1-b574-11e9-871a-06d3f50a37ea", APIVersion:"v1", ResourceVersion:"1378", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap ingress-nginx/nginx-configuration
I0802 22:34:58.784044       8 controller.go:133] Configuration changes detected, backend reload required.
I0802 22:34:58.842176       8 controller.go:149] Backend successfully reloaded.
[02/Aug/2019:22:34:58 +0000]TCP200000.000
200.30.255.93 - [200.30.255.93] - - [02/Aug/2019:22:36:34 +0000] "GET / HTTP/1.1" 200 735 "-" "curl/7.64.0" 71 0.001 [default-http-svc-8080] [] 100.96.1.7:8080 735 0.000 200 d5110dbd6706d7303e7f0a139f26fdbe

@embik
Member

embik commented Aug 8, 2019

Hi @aledbf, are you able to reproduce this with a static HAProxy configuration? I posted my Helm chart values above; this is a minimal HAProxy snippet that mirrors our setup:

frontend kubernetes_ingress_HTTP
    bind 0.0.0.0:80
    mode tcp

    default_backend k8s_ingress_http

frontend kubernetes_ingress_HTTPS
    bind 0.0.0.0:443
    mode tcp

    default_backend k8s_ingress_https

backend k8s_ingress_http
    mode tcp

    default-server inter 1s rise 2 fall 3
    server node01 <node01 IP>:31080 send-proxy
    server node02 <node02 IP>:31080 send-proxy
    server node03 <node03 IP>:31080 send-proxy
    
backend k8s_ingress_https
    mode tcp

    default-server inter 1s rise 2 fall 3
    server node01 <node01 IP>:31443 send-proxy
    server node02 <node02 IP>:31443 send-proxy
    server node03 <node03 IP>:31443 send-proxy

(our setup works pre 0.25.0 and stops working when updating to 0.25.0 without any other changes)
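
For anyone unfamiliar with send-proxy above: HAProxy prepends a PROXY protocol v1 line to every TCP connection, and that line is the only place the real client address survives once the connection is re-originated. Roughly, with placeholder addresses:

# PROXY <protocol> <client address> <destination address> <client port> <destination port>
PROXY TCP4 198.51.100.22 192.0.2.10 56324 31080

nginx then has to be told to trust and parse that line, which is what use-proxy-protocol does.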

@jurgenweber

jurgenweber commented Aug 9, 2019

Yeah, we are getting this also.

I haven't investigated because I did not have the time, but with the new version (0.25.0) I get 403'd, while the old version (0.24.0) works fine.

@pierluigilenoci
Author

@aledbf knock knock... sorry to bother you but we are growing 👯‍♂

@aledbf
Member

aledbf commented Aug 20, 2019

knock knock... sorry to bother you but we are growing 👯‍♂

Again, I cannot reproduce this issue. Please check my comment #4305 (comment).
If you can provide something like that to reproduce the issue, I'd be glad to check.

@embik
Member

embik commented Aug 26, 2019

@aledbf hi, have you tried reproducing this with the Helm Chart config (#4305 (comment)) and HAProxy config (#4305 (comment)) I shared?

This issue is still present for us and prevents us from updating to 0.25.1 and therefore incorporating the CVE patches.

@pierluigilenoci
Author

@aledbf could you please try to install nginx-ingress with Helm? This is my configuration: https://pastebin.com/DtCQGtp6. Of course, you need to add a proper list of IPs and proxy-real-ip-cidr.

@pierluigilenoci
Author

Maybe @ElvinEfendi can help?

@ElvinEfendi
Member

"proxy-real-ip-cidr":
XX.XXX.XXX.XXX/32,
XX.XXX.XXX.XXX/32,
XX.XXX.XXX.XXX/32

@PierluigiLenociAkelius do you have the load balancer IP configured in here?

@pierluigilenoci
Author

pierluigilenoci commented Sep 4, 2019

do you have the load balancer IP configured in here?
@ElvinEfendi Yes

@XciD

XciD commented Sep 17, 2019

I have the same symptoms on my side.
What can I provide to help you?

I'm using OVH Managed Kubernetes with an HAProxy in front of the nginx-ingress-controller.

I can confirm that it works with version 0.24.1.

@pierluigilenoci
Author

@ElvinEfendi could you please help us?

@pierluigilenoci
Author

@XciD if you want, you can provide a way to replicate your configuration.

@pierluigilenoci
Author

This could be related to #4401.

@XciD

XciD commented Sep 17, 2019

It's quite the same as yours:

controller:
    service:
      externalTrafficPolicy: Local
      annotations:
        service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"
    config:
      use-proxy-protocol: "true"
      proxy-real-ip-cidr: "10.108.0.0/14"
      use-forwarded-headers: "true"
      server-tokens: "false"
      http-snippet: |
        geo $realip_remote_addr $is_lb {
          default       0;
          10.108.0.0/14 1;
        }
      server-snippet: |
        if ($is_lb != 1) {
          return 403;
        }
      whitelist-source-range: "X.X.X.X/32"

I'm using OVH Managed Kubernetes; it's a free managed cluster, you just need to add nodes. I can provide a coupon for testing if needed. (I work @ovh)
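
For clarity, roughly what the http-snippet and server-snippet above are doing (a commented copy, not authoritative):

# $realip_remote_addr keeps the original peer address, i.e. the machine that opened
# the TCP connection, before the real_ip module substitutes the client address
# taken from the PROXY header.
geo $realip_remote_addr $is_lb {
  default       0;   # any other peer is not the load balancer
  10.108.0.0/14 1;   # connections arriving from the OVH LB range
}
# drop any connection that did not come through the load balancer
if ($is_lb != 1) {
  return 403;
}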

@aledbf
Member

aledbf commented Sep 17, 2019

To those affected by this issue, please use the image quay.io/kubernetes-ingress-controller/nginx-ingress-controller:dev (current master), which contains the refactoring from #4557.
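
If anyone wants to try that image quickly, a sketch with kubectl (the namespace, deployment and container names are assumptions; adjust them to your install, or set controller.image.tag if you deploy via the Helm chart):

kubectl -n ingress-nginx set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:dev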

@XciD

XciD commented Sep 17, 2019

Works =D

Any date for the release?

@aledbf
Member

aledbf commented Sep 17, 2019

Maybe the end of the week.

@embik
Member

embik commented Sep 17, 2019

I'm ashamed to admit it, but on my side this issue no longer applies either, and not because of #4557 but because I finally added externalTrafficPolicy: Local to my Helm values. Classic "didn't fully read the docs" on my side, sorry.

That being said, something changed in this regard from 0.24.x to 0.25.x, because it worked before; just an interesting tidbit. I guess it should not have worked in the first place.
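
For anyone hitting the same misconfiguration, that setting lives under controller.service in the stable/nginx-ingress values; a minimal sketch merged into the values I posted earlier (other keys omitted):

controller:
  kind: DaemonSet
  service:
    type: NodePort
    # keep traffic on the node that received it, preserving the original source address
    externalTrafficPolicy: Local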

@aledbf
Member

aledbf commented Sep 17, 2019

@embik thank you for the update

@aledbf
Member

aledbf commented Sep 28, 2019

Closing. Please update to 0.26.0. The release contains several fixes related to the extraction of the source IP address and whitelists.
Please reopen if the issue persists.

@aledbf aledbf closed this as completed Sep 28, 2019
@pierluigilenoci
Author

@aledbf the problem is still there.

@pierluigilenoci
Author

@aledbf Before closing this issue, would it not have been better to wait for me to check whether the problem was solved?

@pierluigilenoci
Author

pierluigilenoci commented Sep 30, 2019

I found a possible workaround to get it working: configure "proxy-real-ip-cidr": "0.0.0.0/0". But this solution feels really wrong to me (see the sketch below).

In our configuration, we had listed here the 3 NAT gateways (one per AZ) used by our minions.
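
For clarity, a sketch of both variants as controller config keys (the NAT gateway CIDRs are placeholders):

use-proxy-protocol: "true"
# workaround: trust the PROXY protocol source information from anywhere (very permissive)
proxy-real-ip-cidr: "0.0.0.0/0"
# stricter alternative: list only the NAT gateway / load balancer addresses
# proxy-real-ip-cidr: "203.0.113.1/32,203.0.113.2/32,203.0.113.3/32"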

@unfor19

unfor19 commented Jan 21, 2020

I'm using the Helm chart, so this worked for me (reference: ingress-nginx/issues/3857).

I omitted the LoadBalancer type, since the default is ELB:

controller:
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ACM CERTIFICATE>
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
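
For completeness, installing or upgrading with such a values file would look roughly like this (release name, namespace and file name are placeholders):

helm upgrade --install nginx-ingress stable/nginx-ingress \
  --namespace ingress-nginx \
  -f values.yaml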
