Whitelist-source-range not working properly with PROXY protocol #4305
@PierluigiLenociAkelius I cannot reproduce this issue. Please check the ingress controller pod logs to see if you get the real IP address.
Hi @aledbf, these are the pod logs: Note: 10.55.5.100 is one of the internal IPs of the cluster. I know that 'remote_addr' and 'the_real_ip' should be equal, but they aren't. That's the entire point. The same configuration works on Azure; on AWS it does not.
@bartlettc22 do you have the same issue?
@aledbf knock knock
I can confirm this issue. It appears that 0.25.0 matches the wrong IP against the whitelist when using the PROXY protocol: it matches the load balancer IP against the CIDRs defined by the ingress annotation. Afterwards, it logs the correct client IP, though. This is a redacted part of our logs with 0.25.0:
If my Ingress is annotated with a whitelist-source-range, legitimate clients are rejected because the load balancer IP is checked against it instead. Our setup is not unusual: a TCP HAProxy with PROXY protocol enabled in front of nginx-ingress. This issue might break white- and blacklists for everyone running a similar setup. Edit: Our Helm values for the controller:
```yaml
config:
  use-proxy-protocol: "True"
  proxy-real-ip-cidr: "<LOAD BALANCER IP>/32"
  server-tokens: "false"
kind: DaemonSet
service:
  type: NodePort
  nodePorts:
    http: 31080
    https: 31443
```
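(For context on why `proxy-real-ip-cidr` matters here: with `use-proxy-protocol` enabled, the controller configures nginx's realip module roughly as in the sketch below; this is simplified, not the exact generated nginx.conf. The whitelist is checked against `$remote_addr`, and the realip module only rewrites `$remote_addr` to the client address from the PROXY protocol header when the connecting peer matches `set_real_ip_from`.)

```nginx
server {
    # PROXY protocol must be accepted on the listener for the header to be parsed
    listen 443 ssl proxy_protocol;

    # trust the PROXY protocol header only when the connection comes from the LB
    set_real_ip_from <LOAD BALANCER IP>/32;   # derived from proxy-real-ip-cidr
    real_ip_header proxy_protocol;

    location / {
        # whitelist-source-range is enforced against the (rewritten) client address
        allow <office CIDR>;
        deny all;
    }
}
```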
@PierluigiLenociAkelius I am sorry, but I cannot reproduce this issue.
Hi @aledbf, are you able to reproduce this with a static HAProxy configuration? I posted my Helm chart values above; this is a minimal snippet for HAProxy that mirrors our setup:
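A minimal sketch of that shape (the node address and port are placeholders, not the original values; `send-proxy` is what makes HAProxy prepend the PROXY protocol header on each connection to nginx):

```
frontend https_in
    mode tcp
    bind *:443
    default_backend ingress_nodes

backend ingress_nodes
    mode tcp
    # send-proxy enables the PROXY protocol towards the ingress NodePort
    server node1 192.0.2.10:31443 send-proxy
```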
(Our setup works pre-0.25.0 and stops working when updating to 0.25.0 without any other changes.)
Yeah, we are getting this too. I haven't had time to investigate, but with the new version (0.25.0) I get 403s, while the old version (0.24.0) works fine.
@aledbf knock knock... sorry to bother you, but we are growing 👯♂
Again, I cannot reproduce this issue. Please check my comment #4305 (comment)
@aledbf hi, have you tried reproducing this with the Helm chart config (#4305 (comment)) and HAProxy config (#4305 (comment)) I shared? This issue is still present for us and prevents us from updating to 0.25.1, and therefore from incorporating the CVE patches.
@aledbf could you please try to install nginx-ingress with Helm? This is my configuration: https://pastebin.com/DtCQGtp6. Of course, you need to add a proper list of IPs and proxy-real-ip-cidr.
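The relevant keys in a values file of that shape look roughly like this (placeholder values, not the actual pastebin contents):

```yaml
controller:
  config:
    use-proxy-protocol: "true"
    # CIDR(s) of the load balancer that adds the PROXY protocol header
    proxy-real-ip-cidr: "<ELB CIDR>"
    # comma-separated list of client CIDRs allowed through
    whitelist-source-range: "<office CIDR 1>,<office CIDR 2>"
```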
Maybe @ElvinEfendi can help?
@PierluigiLenociAkelius do you have the load balancer IP configured in proxy-real-ip-cidr here?
I've got the same symptoms on my side. I'm using OVH Managed Kubernetes with an HAProxy in front of the nginx-ingress-controller. I can confirm that it works with
@ElvinEfendi could you please help us?
@XciD if you want, you can provide a way to replicate your configuration.
This could be related to #4401
It's quite the same as yours:

```yaml
controller:
  service:
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v1"
  config:
    use-proxy-protocol: "true"
    proxy-real-ip-cidr: "10.108.0.0/14"
    use-forwarded-headers: "true"
    server-tokens: "false"
    http-snippet: |
      geo $realip_remote_addr $is_lb {
        default 0;
        10.108.0.0/14 1;
      }
    server-snippet: |
      if ($is_lb != 1) {
        return 403;
      }
    whitelist-source-range: "X.X.X.X/32"
```

I'm using OVH Managed Kubernetes. It's a free managed cluster; you just need to add nodes. I can provide a coupon in order to test if needed. (I work @ovh)
To those affected by this issue, please use the image
Works =D Any date for the release?
Maybe the end of the week. |
I'm ashamed to admit it, but at least on my side this issue no longer applies either, not because of #4557 but because of a setting I finally added. That being said, something changed there from 0.24.x to 0.25.x, because it worked before. Just an interesting tidbit; I guess it should not have worked in the first place.
@embik thank you for the update |
Closing. Please update to 0.26.0. The release contains several fixes related to the extraction of the source IP address and whitelists. |
@aledbf the problem is still there. |
@aledbf Before closing this issue, would it not have been better to wait for me to check whether the problem was solved?
I found a possible workaround to get it working: configure proxy-real-ip-cidr with the NAT gateway addresses. In our configuration, we listed there the 3 NAT gateways (1 for every AZ) of our minions.
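As a sketch (the gateway addresses below are placeholders for the real NAT gateway IPs; proxy-real-ip-cidr accepts a comma-separated list):

```yaml
controller:
  config:
    use-proxy-protocol: "true"
    # one /32 per NAT gateway, one gateway per AZ
    proxy-real-ip-cidr: "192.0.2.1/32,192.0.2.2/32,192.0.2.3/32"
```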
I'm using the Helm chart, so this worked for me (reference: ingress-nginx/issues/3857). I omitted the LoadBalancer type, since the default is ELB:
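A sketch of such values (the annotations are the standard in-tree AWS load balancer ones; the rest of the chart values are omitted):

```yaml
controller:
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      # make the ELB forward raw TCP and speak PROXY protocol to the node ports
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```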
BUG REPORT
NGINX Ingress controller version: 0.25.0
Kubernetes version (use kubectl version): 1.13.6
Kernel (e.g. uname -a): 4.9.0-7-amd64
The configuration is simple: AWS, with an ELB in front of nginx.
Kubernetes installed with KOPS
Installed with Helm with this configuration:

```yaml
"use-proxy-protocol": "true"
"whitelist-source-range": "<list of office IPs>"
```
What happened:
I updated nginx-ingress on a test cluster from v0.24.1 to 0.25.0 with Helm.
With version 0.24.1 it works fine; with 0.25.0 I get a 403 when I try to access the dashboard.
What you expected to happen:
Nothing to break, just an updated nginx.
How to reproduce it (as minimally and precisely as possible):
Update the nginx-ingress