metrics from VertxHttpServerMetrics grows until OOM happens #21790
Comments
/cc @ebullient, @jmartisk
Thanks for reporting @janinko. Do you by any chance have information in your heap dump that shows which fields of the `JsonDistributionSummary` instances are holding the data?
Which version of Quarkus are you using? There was a fix added to the Json registry a while ago to correct the depth of the ring buffer. Note that the Json registry is optional, and it stores data and performs calculations that other registries don't. If you don't need it, I would turn it off.
Any chance you can test with 2.5 and see if the problem persists?
We are trying to turn off this config.
We can, but it will take some time for us to release and test the app with Quarkus 2.5. There is not much data flowing through our dev/stage instances, so we can only see the problem in production.
You misunderstood me. This isn't about turning off HTTP metrics; I am suggesting that you turn off the JSON registry if you don't need it. If you do need the JSON registry, please make sure your ring buffer is set to a small number.
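As a rough sketch of those two suggestions in `application.properties` (assuming the standard Quarkus Micrometer JSON registry properties, `quarkus.micrometer.export.json.enabled` and `quarkus.micrometer.export.json.buffer-length`):

```properties
# Option 1: disable the optional JSON registry entirely if you don't need it
quarkus.micrometer.export.json.enabled=false

# Option 2: if you do need the JSON registry, keep the ring buffer small
# (3 is what recent Quarkus versions use, per the discussion below)
quarkus.micrometer.export.json.buffer-length=3
```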
Understood, but won't profiling your staging environment show an ever-increasing ring buffer size if the problem persists?
I don't think anything has changed w.r.t. the Json registry recently; this is a knock-on effect of how the JsonRegistry works (it performs a lot more calculations and requires more storage). The buffer-length setting is a concern: the setting was added and the memory usage fixed in 2.1.0.Final, so any release before that could see this runaway memory usage because the ring buffer length was way too big.
Yes. When we added the flag in June, we set the default. So if the version of Quarkus in use is pre-2.1.0.Final, the explosion of data with the Json exporter is a known issue that would be resolved by turning it off.
This issue seems to be about Quarkus …
Right. The amount of calculation the Json exporter does is not really fixable (aside from ensuring the ring buffer value is small, which it should be). There are a few ways to deal with this, as noted above: turn off the JSON registry if you don't need it, or keep the ring buffer length small if you do.
Closing this. Recommendations for how to work around memory usage are given above. |
@ebullient hmmm, if this thing is not usable in production, we should either document it or tweak it or implement a filter by default. But it's definitely not the behavior I would expect and I'm even surprised this can be so much of an issue. |
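Such a filter can also be registered by the application itself today. A minimal sketch (not the proposed built-in default), assuming a CDI-produced Micrometer `MeterFilter` and the illustrative class name `HttpByteMetricsFilter`, that denies the two distribution summaries named in this report:

```java
import javax.enterprise.inject.Produces;
import javax.inject.Singleton;

import io.micrometer.core.instrument.config.MeterFilter;

public class HttpByteMetricsFilter {

    // Drop the two summaries that dominated the heap dump
    // (http.server.bytes.read and http.server.bytes.written).
    @Produces
    @Singleton
    public MeterFilter denyHttpByteSummaries() {
        return MeterFilter.deny(id ->
                "http.server.bytes.read".equals(id.getName())
                        || "http.server.bytes.written".equals(id.getName()));
    }
}
```

Note that this removes the byte-count metrics entirely, trading that observability for bounded memory.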
What's a bit weird is that the size of the ring buffer is 1024 in your case.
I tried to debug the creation of the ring buffer and I get 3 for the size, which is what we expect in recent Quarkus versions. @janinko I would be interested in having more information about your setup and how you end up with a ring buffer sized to 1024. I added a breakpoint in …
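One way to gather that information is to log the effective buffer length at startup. A minimal sketch, assuming the property name `quarkus.micrometer.export.json.buffer-length` and the illustrative class name `BufferLengthLogger`:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.jboss.logging.Logger;

import io.quarkus.runtime.StartupEvent;

@ApplicationScoped
public class BufferLengthLogger {

    private static final Logger LOG = Logger.getLogger(BufferLengthLogger.class);

    // Effective ring buffer length for the JSON registry (assuming 3 as the default).
    @ConfigProperty(name = "quarkus.micrometer.export.json.buffer-length", defaultValue = "3")
    int bufferLength;

    void onStart(@Observes StartupEvent ev) {
        LOG.infof("JSON registry buffer-length = %d", bufferLength);
    }
}
```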
Describe the bug
We have an application that reads a lot of data over the network. The application crashed several times because of out-of-memory errors. Investigating the memory showed that two instances of `io.quarkus.micrometer.runtime.registry.json.JsonDistributionSummary` took 96 % of the memory and over 5 days grew from 162.5 MiB to 355.7 MiB, while the rest of the app's memory usage grew by 0.4 MiB. The two `JsonDistributionSummary` instances were for `http.server.bytes.read` and `http.server.bytes.written`.
Expected behavior
The memory usage is constrained.
Actual behavior
The memory usage grows until an OOM error happens.
How to Reproduce?
No response
Output of `uname -a` or `ver`
No response
Output of `java -version`
No response
GraalVM version (if different from Java)
No response
Quarkus version or git rev
No response
Build tool (ie. output of `mvnw --version` or `gradlew --version`)
No response
Additional information
No response