Agent version 8.16.0 has connection issues to NR #2170
Comments
The change you mention is unlikely to have affected the agent. Well, I assume that JFR is on.
We specify a proxy via NEW_RELIC_PROXY_HOST/NEW_RELIC_PROXY_PORT.
No, I mean the JFR/Real-time profiling functionality in the Java agent. Does your proxy require username/password? I am also assuming that most of the agent works; that is, when using New Relic's UI you can see everything except for Real-time profiling.
Ah, yes, real-time profiling is enabled, and you're correct that it's broken. We aren't specifying a proxy_scheme, but it seems to work anyway; I don't know what it defaults to. There's no proxy authentication.
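For reference, a minimal sketch of how the proxy settings discussed above could be pinned explicitly in newrelic.yml instead of being left to defaults. The host, port, and scheme values below are placeholders, not taken from this issue, and the exact stanza layout is assumed from the agent's standard config file:

```yaml
common: &default_settings
  # Explicit proxy settings; equivalent to the NEW_RELIC_PROXY_HOST /
  # NEW_RELIC_PROXY_PORT environment variables mentioned above.
  proxy_host: proxy.example.internal   # placeholder
  proxy_port: 3128                     # placeholder
  # Set the scheme explicitly rather than relying on the agent's default.
  proxy_scheme: http
```

Whether the implicit default actually changed between 8.15.0 and 8.16.0 isn't established here; pinning it explicitly just removes one variable from the comparison.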
Description
We were using Java agent version 8.15.0 and had no issues. With the only change being an upgrade to 8.16.0, our logs are now full of these messages:
```
com.newrelic WARN: IOException (message: Connection reset, cause: null) while trying to send data to New Relic. EventBatch retry recommended
com.newrelic INFO: [EventBatch] - Batch sending failed. Backing off 0 MILLISECONDS
com.newrelic WARN: IOException (message: Connection reset, cause: null) while trying to send data to New Relic. EventBatch retry recommended
com.newrelic INFO: [EventBatch] - Batch sending failed. Backing off 0 MILLISECONDS
```
Also, presumably due to buffering/queueing of metrics, the heap usage gradually goes up to 500 MB.
This is accompanied by messages like this:
```
com.newrelic WARN: Refusing to schedule batch of size 463 (would put us over max size 1000000, available = 410)
```
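Since the comments above single out the JFR/Real-time profiling path, one diagnostic step would be to temporarily disable that feature and check whether the connection resets and the growing backlog stop. A sketch, assuming the agent's jfr stanza in newrelic.yml (not verified in this issue):

```yaml
common: &default_settings
  # Temporarily turn off JFR-based Real-Time Profiling to test whether the
  # EventBatch connection resets and queue growth are tied to that code path.
  jfr:
    enabled: false
```

If the symptoms disappear with this off, that would point at the JFR/EventBatch sender rather than the core harvest cycle.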
Your Environment
Java OpenJDK 17.0.13+11
Using G1 GC
Additional context
Skimming through the release notes, I'd suspect this change is the culprit:
#2082