Support grpc keep alive server parameters #4402
Using the `nginx.ingress.kubernetes.io/server-snippet` annotation saved my day, thank you!
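The snippet itself did not survive in the comment above; a common variant for long-lived gRPC streams raises the per-request gRPC timeouts in nginx, which default to 60s (the exact values below are assumptions, not the original poster's configuration):

```yaml
nginx.ingress.kubernetes.io/server-snippet: |
  grpc_read_timeout 3600s;
  grpc_send_timeout 3600s;
  client_body_timeout 3600s;
```

`grpc_read_timeout` and `grpc_send_timeout` are standard `ngx_http_grpc_module` directives; the 60s default on them is what commonly kills idle streams after exactly one minute.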
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think I want to have a better look at this, since I ran into the same issue. /remove-lifecycle rotten
/reopen
@PI-Victor: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is there an alternative to this? Because of kubernetes/kubernetes#126811, server snippets are disabled in my cluster (RKE2).
I am not sure if this is related, but I TLS-terminate a gRPC upstream with NGINX Ingress. After 1 minute of inactivity the stream stops, regardless of the idle timeout, connection timeout, or any other timeout I specify on the client side when opening the stream. The gRPC server logs clearly show a 1 min timeout which is NOT specified anywhere in the configuration or call options for the service. Furthermore, I cannot reproduce the behaviour without the NGINX Ingress, so I suspect that NGINX interferes with the gRPC stream during TLS termination. My ingress annotations are as simple as:
Any idea on avoiding this?
Hi @danielleiszen, do you have any idea how to avoid this? I am hitting the same problem.
Hi @chenchengfa93, I ended up doing something similar. I created a keep alive endpoint on my service that I call periodically from the client. The keep alive triggers a downstream communication and keeps the channel open. The client schedules the next keep alive call only when that downstream event arrives. This seems to work. |
Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST
NGINX Ingress controller version:
Kubernetes version (use `kubectl version`):
Environment: AWS, Rancher
What happened:
On the Go backend I have a keep-alive policy.
My goal is to have a long-lived bidirectional streaming RPC so the client can accept incoming updates from the backend. Also, if the client disconnects (say its internet connection drops), I want the server to detect this as fast as possible (ideally within 10 seconds). Currently my gRPC server sends a keep-alive ping every 10 seconds and the nginx proxy acks the ping, but nginx itself does not ping the client.
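The keep-alive policy referenced above was lost from the report; a typical Go server configuration matching the described behaviour (server pings every 10 seconds) looks roughly like the sketch below, using the `google.golang.org/grpc/keepalive` package. The port and timeout values are assumptions.

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer(
		// Ping the client after 10s of inactivity and drop the
		// connection if no ack arrives within 5s, so dead clients
		// are detected quickly.
		grpc.KeepaliveParams(keepalive.ServerParameters{
			Time:    10 * time.Second,
			Timeout: 5 * time.Second,
		}),
		// Permit clients to ping this often without being disconnected.
		grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
			MinTime:             10 * time.Second,
			PermitWithoutStream: true,
		}),
	)
	// ...register services here...
	log.Fatal(srv.Serve(lis))
}
```

The issue is that these server-side pings terminate at the nginx proxy rather than reaching the client, which is exactly what the feature request below is about.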
What you expected to happen:
I expect nginx ingress to expose settings that allow configuring a gRPC keep-alive policy, something like:
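The example the author gave here did not survive; the request is for something of this hypothetical shape (these annotations do not exist in ingress-nginx and are shown only to illustrate the feature request):

```yaml
# hypothetical annotations, illustrating the feature request only
nginx.ingress.kubernetes.io/grpc-keepalive-time: "10s"
nginx.ingress.kubernetes.io/grpc-keepalive-timeout: "5s"
```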
Or, even better, to simply forward gRPC ping frames to the client.
I found a similar issue on the Envoy proxy, but it seems to be fixed now: envoyproxy/envoy#2086