http2 keepalive issue when mixing http2 and http1 which are routing to the same pod port #4836
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can we reopen this?
I am still looking for a solution or workaround for this issue. Can this issue be reopened?
I know nothing about multiplexing http1 & http2 on the same port.
Also curious whether ArgoCD was installed with a custom configuration that uses a single port, 8080, on the argocd-server pod for both http and grpc.
I am experiencing the same issue and might have found the solution. In my case I wanted to expose a single endpoint serving HTTP, WebSockets and gRPC. Without the […] I created two ingresses, one matching the gRPC endpoint paths with the […] (see the annotation sketch after this comment).

When only making HTTP and WebSockets requests, all would work fine. However, I noticed that the first gRPC request(s) would fail, after which they would succeed. After making some gRPC requests I noticed the same for the HTTP and WebSocket requests; again, a few requests later these would work. Looking at the configuration that the […].

**My assumption**

My assumption is that the error is caused by the […].

**Possible solution**

I'm currently testing overriding these settings using a ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  upstream-keepalive-requests: "1"
```

Initially I tried […]. This seems to be working for me!

Note: it took me some time to figure out the correct […]; the correct […].
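For reference: in ingress-nginx the standard way to mark an ingress backend as gRPC is the `nginx.ingress.kubernetes.io/backend-protocol` annotation, which is presumably what the gRPC-path ingress described above uses. A minimal sketch, with a hypothetical service name, host and path (gRPC through ingress-nginx also needs TLS on the listener, since HTTP/2 is only served over TLS by default):

```yaml
# Minimal sketch (hypothetical names). The backend-protocol annotation makes nginx
# proxy this backend with grpc_pass instead of a plain HTTP/1.1 proxy_pass.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service-grpc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /mypackage.MyService    # gRPC calls arrive as /<package>.<Service>/<Method>
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080
  tls:
    - hosts: [api.example.com]
      secretName: my-service-tls          # hypothetical secret
```

A second ingress without the annotation, matching the remaining paths, would point at the same service and port for the HTTP and WebSocket traffic.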
Can we reopen this?
@thetruechar: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can we reopen this? We have to do some nasty workaround: […]
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
NGINX Ingress controller version:
0.26.1
Kubernetes version (use `kubectl version`): v1.13.11-gke.14
Environment:
Cloud provider or hardware configuration:
GKE cluster at GCP
OS (e.g. from /etc/os-release):
COS
Kernel (e.g. `uname -a`):
Install tools:
Others:
What happened:
We are using the following 2 ingress resources in a GKE cluster (taken from the documentation of ArgoCD with example hostnames):
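For reference, the two-ingress pattern from the ArgoCD documentation looks roughly like the sketch below: a plain HTTPS ingress for the UI/API and a second ingress carrying the GRPC backend-protocol annotation, both pointing at the same service and port. This is a hedged sketch, not the exact manifests from this cluster; the HTTP hostname, TLS secret names and the networking.k8s.io/v1 API version are assumptions.

```yaml
# Sketch of the two-ingress pattern described in the ArgoCD docs (not the reporter's exact YAML).
# The gRPC host matches the one quoted later in this issue; the HTTP host and secrets are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-http
spec:
  ingressClassName: nginx
  rules:
    - host: examplehost.example.com            # hypothetical UI/HTTP hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 8080
  tls:
    - hosts: [examplehost.example.com]
      secretName: argocd-server-tls            # hypothetical secret
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-grpc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"   # nginx proxies this backend as gRPC
spec:
  ingressClassName: nginx
  rules:
    - host: cli.examplehost.example.com        # the gRPC host used in this issue
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 8080
  tls:
    - hosts: [cli.examplehost.example.com]
      secretName: argocd-server-grpc-tls       # hypothetical secret
```

Both ingresses resolve to the same upstream (argocd-server:8080), which is presumably why the upstream keepalive connections end up being shared between the two protocols.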
Both ingress resources are routing to the following k8s service:
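A service of roughly this shape would match the description (a single port, 8080, carrying both HTTP/1.1 and gRPC traffic); the selector label is an assumption that depends on how ArgoCD was installed:

```yaml
# Hedged sketch of the backing service; selector labels depend on the ArgoCD installation.
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
spec:
  selector:
    app.kubernetes.io/name: argocd-server   # assumed label
  ports:
    - name: http
      port: 8080         # single port; the pod decides per connection whether it is http1 or http2 (grpc)
      targetPort: 8080
```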
That way nginx ingress terminates the SSL session and forwards traffic to the target pod. The application in the target pod then decides on port 8080 whether the traffic is http1 or http2 (grpc).
We noticed that when we do grpc calls to the "cli.examplehost.example.com" ingress we sometimes get an error with status code 500. A further call to the same endpoint afterwards works without an issue.
In the log of the nginx ingress controller we see the following message in the case of an error:
It looks like there is an issue with the keepalive session from nginx to the target pod. We only see the issue when using the grpc ingress host; calls to the https ingress resource always work.
What you expected to happen:
Requests to the grpc ingress do not intermittently fail with an error.
How to reproduce it (as minimally and precisely as possible):
Have a target application which can serve http2 and http1 on the same port (like ArgoCD). Then create 2 ingress resources (one for grpc and one for https) which route to the same target pod and port.
Anything else we need to know: