use heap buffers in the default payload decoder #945
Conversation
LGTM. There is nothing better than a production-tested improvement. Thank you @SerCeMan for sharing your prod observations!
Also, can you specify which JDK version you used, and have you checked the same on other JDK versions?
Can that be tuned using JVM props?
Just wanna make sure that other JDK versions/setups will be affected in a positive way as well.
Also, just curious, why don't you use the …
I believe that the behavior would manifest on any JDK; however, the effect is likely to be different and depends on the GC, as different GCs handle weak references differently. In our case, we use Shenandoah GC on this service, and the direct buffers here were the cause of much longer pauses than there would be otherwise. If I understand correctly, the instances can also run out of memory when using direct buffers on any JVM (there is a more detailed description of the effect in "2. OutOfMemoryError: Direct Buffer Memory"). I was also planning to publish a small blog post on this, as the debugging story was quite interesting and the behavior can be reproduced relatively easily.
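To make the failure mode concrete, here is a minimal, self-contained sketch (mine, not from the PR) of how the effect can be reproduced. It assumes you cap direct memory with -XX:MaxDirectMemorySize; the native memory behind a direct buffer is only reclaimed once the GC notices the dead ByteBuffer and runs its cleaner, so allocation can outpace reclamation even while the heap itself is almost empty.

```java
import java.nio.ByteBuffer;

// A minimal reproduction sketch. Run with e.g.:
//   java -XX:MaxDirectMemorySize=64m -Xmx2g DirectBufferPressure
// Each iteration drops its reference immediately, but the native memory
// behind a direct buffer is only freed after the GC processes the
// buffer's cleaner. With a large, mostly-empty heap the GC has little
// reason to run, so the loop can die with
// "OutOfMemoryError: Direct buffer memory" despite no live references.
public class DirectBufferPressure {
    public static void main(String[] args) {
        for (long i = 0; ; i++) {
            ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024); // 1 MiB off-heap
            buf.put(0, (byte) 1); // touch the buffer so the work is real
            if (i % 64 == 0) {
                System.out.println("allocated " + i + " MiB of direct memory so far");
            }
        }
    }
}
```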
We use it in a different setup; however, in this case, we haven't switched yet as it requires more careful ref management.
@SerCeMan Does it make sense to use buffer creation similar to what is done in Netty? E.g. having a property which specifies how to create a ByteBuffer. For example, we could safely rely on Netty's …
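As a rough illustration of that suggestion (the property name and factory class here are hypothetical, invented for this sketch rather than taken from RSocket or Netty), the decoder could pick its allocation strategy from a system property:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of a property-driven allocation strategy.
// The property name "io.rsocket.payload.direct" is invented for
// illustration and is not a real RSocket option.
final class BufferFactory {
    private static final boolean USE_DIRECT =
            Boolean.getBoolean("io.rsocket.payload.direct");

    static ByteBuffer allocate(int capacity) {
        // Default to heap buffers; only use direct memory on explicit opt-in.
        return USE_DIRECT
                ? ByteBuffer.allocateDirect(capacity)
                : ByteBuffer.allocate(capacity);
    }
}
```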
Also, if you use it in 1.0.x, I suggest setting …
If I understand correctly, it would still require releasing the buffers using PlatformDependent.freeByteBuffer to release the underlying native memory proactively, while the goal of …
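For reference, Netty exposes this kind of proactive release as PlatformDependent.freeDirectBuffer. A minimal sketch (my own, assuming the buffer is consumed synchronously and never touched after the call):

```java
import java.nio.ByteBuffer;
import io.netty.util.internal.PlatformDependent;

// Sketch of proactively freeing a direct buffer's native memory instead
// of waiting for the GC to run its cleaner. The buffer must not be used
// after freeDirectBuffer returns.
final class ProactiveRelease {
    static void process(byte[] payload) {
        ByteBuffer direct = ByteBuffer.allocateDirect(payload.length);
        direct.put(payload);
        direct.flip();
        try {
            consume(direct); // hand the buffer to a synchronous consumer
        } finally {
            PlatformDependent.freeDirectBuffer(direct); // native memory freed now
        }
    }

    private static void consume(ByteBuffer buffer) {
        // placeholder consumer for the sketch
    }
}
```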
Thanks! We're using 1.0.2, but since we've already supplied a "custom" …
I have been talking about the …
It is even a surprise to me that it's using direct memory. It also wasn't always the case; it looks like it changed some time before 1.0. To me, the point of DefaultPayloadDecoder is to provide an alternative to using direct memory at the cost of giving up zero-copy.
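To spell out that trade-off, here is a simplified sketch (not the actual rsocket-java decoder code): the copying style detaches the payload from the transport's buffer, while a zero-copy style shares the underlying memory and shifts release() bookkeeping onto the caller:

```java
import io.netty.buffer.ByteBuf;
import java.nio.ByteBuffer;

// Simplified illustration of copy-vs-zero-copy decoding; not the real
// DefaultPayloadDecoder implementation.
final class DecodeStyles {
    // Copying style: bytes are copied into an independent heap buffer,
    // so the transport frame can be released immediately and the payload
    // is collected by the GC like any ordinary object.
    static ByteBuffer copyToHeap(ByteBuf frame) {
        ByteBuffer heap = ByteBuffer.allocate(frame.readableBytes());
        frame.readBytes(heap);
        heap.flip();
        return heap;
    }

    // Zero-copy style: no bytes move, but the caller now shares the
    // underlying (typically direct, pooled) memory and must release()
    // the slice at exactly the right time.
    static ByteBuf sliceZeroCopy(ByteBuf frame) {
        return frame.retainedSlice();
    }
}
```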
Ah, I see. However, I'm not 100% sure about the use case for …
Thanks! Done.
@SerCeMan merged and ported to master! Thank you for your contribution!
Hey, folks!
This PR proposes changing DefaultPayloadDecoder to use on-heap buffers instead of off-heap buffers. The reason for this is that off-heap buffers release the underlying memory via a cleaner once the ByteBuffer's reference is gone. With direct buffers, it's much harder to predict the app's actual memory consumption, as there are no guarantees about when the cleaner is run and when the references are actually released. Additionally, jdk.internal.ref.Cleaner increases the GC pressure. In our case, replacing allocateDirect with allocate resulted in a significant drop in the length of GC pauses, as shown on the graphs below.

In my understanding, the change should also positively affect performance in the case when the buffers are small. For instance, where the server handles a large number of mostly idle connections and mostly processes small payload and keepalive frames, the buffers can be allocated directly in the TLAB rather than as a chunk of global memory.
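For context, the substance of the change is roughly the following (a sketch of the allocation site, not the exact diff; frameLength is a stand-in for the decoded frame size):

```java
import java.nio.ByteBuffer;

final class AllocationChange {
    // Before: native memory, reclaimed only once the GC eventually
    // processes the buffer's cleaner.
    static ByteBuffer before(int frameLength) {
        return ByteBuffer.allocateDirect(frameLength);
    }

    // After: plain heap memory, allocated in the TLAB and collected
    // like any other object.
    static ByteBuffer after(int frameLength) {
        return ByteBuffer.allocate(frameLength);
    }
}
```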
I believe that it would be preferable for the default general-purpose mode to allow for predictability, so I'm proposing this change.