client fails annoyingly with lots of log messages when server does not speak grpc #120
On Mon, Mar 16, 2015 at 6:38 PM, jellevandenhooff [email protected] wrote:

Ah -- I guess the messages themselves might be only a symptom of the underlying problem. I don't know if instantaneous reconnecting is desired behavior, but if it is not, then adding some back-off would probably also make me happy :) Thanks!
Exponential back-off is already there (https://github.com/grpc/grpc-go/blob/master/clientconn.go#L164). You actually hit a different case: your connect succeeded because that port is listening, but when you sent the first RPC it got rejected by the peer, because the peer does not speak grpc. Therefore, you only ever hit the initial reconnect interval.
How do you feel about moving the exponential back-off into clientConn so it also backs off if a server is crashing or misbehaving? I could try it and see what it looks like, if you like that idea.
I was thinking: could we reconnect a little more aggressively? Having to restart the client most of the time when I restart the server doesn't feel right; it's killing my productivity. Maybe reconnect when there are new RPC calls in the client? Just a little more aggressive; my server wouldn't complain about that. What I want is something like zeromq or nanomsg: you don't need to care whether the server went down, you just send messages.
Got the same high volume of error messages under the following scenario:
It looks like there is an infinite loop in the Invoke function.
In this case, where the server explicitly closes the connection because it does not recognise the grpc protocol, the client should not attempt a retry. The following test code reproduces the above scenario
I also felt this was extremely annoying in a dev environment (I have multiple services connecting to a non-critical grpc service, and they would cumulatively spit out over 10k log lines in 5 seconds if I turned that service off), so I just replace the grpclog logger with grpclog.SetLogger(log.New(ioutil.Discard, "", 0)) when in dev.
If you replace it with glogger (https://github.com/grpc/grpc-go/blob/master/grpclog/glogger/glogger.go), all the logs will go to some files instead of stderr unless you configure it explicitly. |
I am experiencing the same issue. Would it be possible to add more context to this log message, e.g. the hostname or service name?
It would be very nice if the underlying backoff mechanism covered this case as well.
Is there any update on this? This problem really floods logs when running on GCE. Setting the logger seems to provide an initial fix, but it feels wrong.
We are working on improving the logging system, #922 is the first step. |
While I appreciate #922, the many log messages are a symptom of many failed connection attempts. I think it'd be worthwhile also reducing connection frequency, because even if we're not spewing logs, we're still hammering some poor unsuspecting service. |
I was going to mention @jellevandenhooff's observation too, I thought there was some sort of exponential backoff involved when reconnecting. |
Can you share more information about your program? For example, what error did you get, and what server were you connecting to? What we want to know is the root cause of the connection error. We have a backoff mechanism for when the connection can't be established in the first place.
The root cause of the error is exactly as you described, and a backoff as you propose is the solution. My PR from last year sketched out an approach for implementing that, but with all the changes to grpc it doesn't quite merge anymore.
I assume by this you mean "connection is established successfully, and disconnects immediately after that." |
@menghanl in my case the server did not have its certificate set up properly.
closing in favor of #954 |
I am running "go version go1.4.1 darwin/amd64". I accidentally pointed a grpc client at an address that didn't speak grpc. Afterwards, grpc printed a lot of log messages that were unhelpful at best and distracting at worst. I would prefer grpc to a) not generate as many errors, perhaps with some back-off mechanism, and b) not print as many errors.
Specifically, my terminal filled with hundreds of lines of the form
I tried sticking in a "c.failFast = true" in grpc.Invoke, but that did not help.