Please answer these questions before submitting your issue.
What version of gRPC are you using?
Commit bb78878
What version of Go are you using (go version)?
Go 1.9
What did you do?
When connecting to a gRPC backend through a load balancer like HA Proxy, if the load balancer decides to close the connection immediately (e.g. because there is no backend available), gRPC will attempt to reconnect immediately.
What did you expect to see?
gRPC should back off before trying to reconnect
What did you see instead?
gRPC is trying to reconnect thousands of times per second.
The issue comes from the loop in transportMonitor() in clientconn.go. When the backend closes the connection, we wake up in case <-t.Error(): with an EOF error, then we do if err := ac.resetTransport(false); err != nil { and this succeeds because we are able to connect to the load balancer again, so we loop again. But the new connection gets almost instantaneously closed again by the load balancer, leading to a quasi-busy loop in transportMonitor() and a ton of log spam.