Batching buffer with minChunkSize #32858
Conversation
@Sanne @geoand @stuartwdouglas the implementation can already be reviewed, actually - apart from the questions at #32858 (comment)
If this works, @Sanne, the change at https://github.com/TechEmpower/FrameworkBenchmarks/pull/7861/files#diff-ea8995e85f3d4bb82b2c8aa7feab84fb8d193519e3f919c7d43c3c2c25310344R31-R32 won't be impactful anymore, and real users would also benefit from not using an 8K Netty (pooled) direct buffer for each response, as happens now.
Thanks @franz1981! Let's move this ahead in a couple of weeks' time :)
...active/server/vertx/src/main/java/org/jboss/resteasy/reactive/server/vertx/AppendBuffer.java
I'd like to see how this performs with small, medium, and large payloads compared to what we have currently.
Large payloads should look similar (if they approach the configured buffer size, i.e. 8K); results get better and better as smaller single payloads are written, but they will get worse if users perform many small writes to the output stream.
Many small ones would only happen in SSE or other streaming scenarios, so definitely not a major concern.
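To make the trade-off above concrete, here is a minimal, hypothetical sketch of a batching append buffer: small writes are accumulated into the current chunk until a minimum chunk size is reached, at which point the chunk is sealed and a new one is started. The names (`SimpleAppendBuffer`, `minChunkSize`) and the structure are illustrative only, not the actual `AppendBuffer` implementation in this PR.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only: batches many small writes into chunks of at
// least `minChunkSize` bytes, so each write does not need its own fixed
// 8K buffer. Not the real AppendBuffer from this PR.
final class SimpleAppendBuffer {
    private final int minChunkSize;
    private final Queue<byte[]> chunks = new ArrayDeque<>();
    private ByteArrayOutputStream current = new ByteArrayOutputStream();

    SimpleAppendBuffer(int minChunkSize) {
        this.minChunkSize = minChunkSize;
    }

    void append(byte[] data) {
        current.write(data, 0, data.length);
        if (current.size() >= minChunkSize) {
            // current chunk is "full enough": seal it and start a new one
            chunks.add(current.toByteArray());
            current = new ByteArrayOutputStream();
        }
    }

    /** Drains everything buffered so far, including the partial chunk. */
    Queue<byte[]> flush() {
        if (current.size() > 0) {
            chunks.add(current.toByteArray());
            current = new ByteArrayOutputStream();
        }
        Queue<byte[]> out = new ArrayDeque<>(chunks);
        chunks.clear();
        return out;
    }
}
```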
I've spent a few weeks testing with our CI using TechEmpower, looking for evident regressions with both stacks (reactive/blocking), with/without Hibernate, and I didn't see any performance regression nor improvement, throughput-wise. What this patch can deliver instead is a very tangible improvement in off-heap/direct memory usage in the case of blocking endpoints:
This last behavior makes it very important NOT to over-size such an allocation buffer, because it determines how many in-flight bytes overall may need to be allocated by the Netty allocator, which will grow to accommodate the incoming load. In a JSON hello-world load test using a blocking endpoint, the saving in RSS has been ~2MB vs ~70MB.
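For context, a hedged sketch of the allocation pattern being discussed: with Netty's pooled allocator, requesting a fixed 8K direct buffer per response keeps roughly 8K of arena memory in flight for each concurrent response, while sizing the request to the actual payload lets the allocator hand out a much smaller size class. The payload and capacities below are illustrative numbers, not taken from the PR.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class DirectBufferSizingSketch {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;
        byte[] payload = "{\"message\":\"Hello, World!\"}".getBytes();

        // Old pattern (illustrative): one 8K direct buffer per response,
        // regardless of how small the payload actually is.
        ByteBuf fixed = alloc.directBuffer(8 * 1024);
        fixed.writeBytes(payload);
        fixed.release();

        // Batched pattern (illustrative): request only what the payload
        // needs; the pooled allocator rounds it up to a small size class,
        // keeping far fewer in-flight direct bytes under load.
        ByteBuf sized = alloc.directBuffer(payload.length);
        sized.writeBytes(payload);
        sized.release();
    }
}
```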
This is a very significant improvement! Definitely makes the PR worthy of inclusion :). Thanks a lot for this!
I just have to decide what the minChunkSize should be.
The logic is explained at https://github.com/netty/netty/blob/eb3feb479826949090a9a55c782722ece9b42e50/buffer/src/main/java/io/netty/buffer/SizeClasses.java, and https://github.com/netty/netty/blob/eb3feb479826949090a9a55c782722ece9b42e50/buffer/src/main/java/io/netty/buffer/SizeClasses.java#L83 is the initial log2 step, i.e. 4 (2^4 = 16, which is the step between the different size classes).
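As a simplified illustration of that 16-byte quantum only: a requested size is rounded up to the next multiple of 2^4 = 16. Netty's real SizeClasses table uses progressively larger steps for bigger size groups, so this sketch covers just the smallest group and is not the full lookup.

```java
// Simplified illustration of Netty's 16-byte size-class quantum
// (LOG2_QUANTUM = 4 in SizeClasses). Only the smallest group steps by
// exactly 16 bytes; larger groups use bigger steps, so this is NOT the
// complete SizeClasses computation.
public class SizeQuantumSketch {
    static final int LOG2_QUANTUM = 4;           // 2^4 = 16 bytes
    static final int QUANTUM = 1 << LOG2_QUANTUM;

    static int roundUpToQuantum(int requested) {
        return (requested + QUANTUM - 1) & -QUANTUM;
    }

    public static void main(String[] args) {
        System.out.println(roundUpToQuantum(20)); // 32
        System.out.println(roundUpToQuantum(16)); // 16
    }
}
```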
What is holding this up? Any concerns?
I've run all the tests I could to be sure there were no evident regressions, and now I'm addressing the leaks in case exceptions are thrown.
Yup, I've already talked with @franz1981 and once he is ready, we can rebase onto
...active/server/vertx/src/main/java/org/jboss/resteasy/reactive/server/vertx/AppendBuffer.java
The original code, given that it was using a single buffer, was correctly handling the release of resources, so it has taken me a while to ensure similar behavior now that we can have a few chunks instead... @geoand
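A hedged sketch of the kind of cleanup this implies: once there can be several chunks instead of a single buffer, a failure path has to release every chunk that was already allocated, not just one `ByteBuf`. The helper and its name below are hypothetical, not code from the PR.

```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ReferenceCountUtil;
import java.util.List;

// Hypothetical helper: with several chunks in flight, a failure must
// release every chunk that was already allocated, whereas the previous
// single-buffer code only had one ByteBuf to worry about.
final class ChunkCleanup {
    static void releaseAll(List<ByteBuf> chunks) {
        for (ByteBuf chunk : chunks) {
            // safeRelease swallows exceptions, so one bad chunk does not
            // prevent releasing the rest
            ReferenceCountUtil.safeRelease(chunk);
        }
        chunks.clear();
    }
}
```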
I don't think we need this to be configurable - at least not for the time being. If we do in the future come up with specific scenarios (I mean code that users actually run) where configuration would be beneficial, we can add it (along with some good documentation).
Force pushed an update that makes the
✔️ The latest workflow run for the pull request has completed successfully. It should be safe to merge provided you have a look at the other checks in the summary.
Merging, thanks a ton for this @franz1981!
Fixes #32546
This is still in draft due to a few still-open questions:
- What should `flushBuffer` do in case of failure? Should it clean up its whole content, losing it? What is the caller supposed to do?
- What should `ResteasyReactiveOutputStream` do in case of failure while using the buffer returned by `flushBuffer`? Release it if not already released? (In the previous impl this wasn't well defined in each state; a caller-side sketch follows below.)
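For the second question, a hedged illustration of the caller-side contract under discussion: if writing the buffer returned by `flushBuffer` fails, the caller releases it unless ownership was already transferred to the write path. The interfaces and names here are stand-ins for illustration, not the PR's actual types.

```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ReferenceCountUtil;

// Hypothetical sketch of the caller-side question above: if something
// fails after flushBuffer() handed out its content, who releases it?
// Here the caller releases the buffer only when it still owns it, i.e.
// when the write path never took over the reference count.
final class FlushBufferCaller {
    interface BufferSource {
        ByteBuf flushBuffer(); // stand-in for the method discussed in the PR
    }

    interface WritePath {
        void write(ByteBuf buf); // assumed to take ownership on success
    }

    static void flushTo(BufferSource source, WritePath sink) {
        ByteBuf buf = source.flushBuffer();
        if (buf == null) {
            return; // nothing buffered
        }
        boolean ownershipTransferred = false;
        try {
            sink.write(buf);
            ownershipTransferred = true;
        } finally {
            if (!ownershipTransferred) {
                ReferenceCountUtil.safeRelease(buf);
            }
        }
    }
}
```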