Perf: Use fewer slices in while loop #2
Conversation
Should boost performance significantly for larger incoming buffers.

This fixes the issue for me at ipfs-inactive/js-ipfs-unixfs-engine#187.

The way I implemented my test, a single buffer of 57621555 bytes comes in and gets split into ~220 blocks. By not slicing the remainder into a new buffer 219 times, this saves a lot of buffer-allocation churn when processing a buffer that large.
```diff
@@ -30,13 +30,15 @@ module.exports = function block (size, opts) {
    bufferedBytes += data.length
    buffered.push(data)

    var b = Buffer.concat(buffered)
```
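For readers skimming the hunk, here is a minimal sketch of the fewer-slices pattern the PR title describes, reusing the names visible above (`b`, `bufferedBytes`, `buffered`, `size`); the exact code merged in this PR may differ in detail:

```js
var b = Buffer.concat(buffered)
var offset = 0

// Walk an offset through the concatenated buffer instead of
// re-slicing the remainder on every loop iteration.
while (bufferedBytes >= size) {
  this.push(b.slice(offset, offset + size)) // slice() is a zero-copy view
  offset += size
  bufferedBytes -= size
}

// Keep only the unconsumed tail around for the next write.
buffered = bufferedBytes > 0 ? [b.slice(offset)] : []
```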
We can avoid this Buffer.concat allocation as well, I think, by just queuing new buffers directly from the array.
Actually we can't, never mind. The overhead of slicing into the buffers probably wouldn't be worth it.
Hm. It's probably only worth looking at with a specific test case where it suffers from that.
In general I assume the new worst-case scenario is when, on average, you need two buffers from your source to produce one buffer out, and you are receiving a high volume of buffers. For example, 130-byte buffers in and 250-byte buffers out: for every pair it would concat once, then slice twice.
The only way I see that improving is by also tracking buffer offsets and keeping a little more data in the cache than strictly necessary: never using slice or concat at all, but allocating the full output buffer at once and doing (partial) copies into it.
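A rough sketch of that copy-based idea (not what this PR implements): the function name `nextBlock` and the `queue`/`headOffset` bookkeeping are hypothetical, and the caller is assumed to have already checked that at least `size` bytes are buffered:

```js
// Hypothetical: fill one preallocated block by copying partial source
// buffers out of the queue, tracking how far into the head we have read.
function nextBlock (queue, headOffset, size) {
  var out = Buffer.alloc(size) // Buffer.allocUnsafe would skip zero-fill
  var filled = 0
  while (filled < size) {
    var head = queue[0]
    var n = Math.min(size - filled, head.length - headOffset)
    head.copy(out, filled, headOffset, headOffset + n) // partial copy
    filled += n
    headOffset += n
    if (headOffset === head.length) { // head fully consumed, drop it
      queue.shift()
      headOffset = 0
    }
  }
  // No slice or concat: one allocation per emitted block.
  return { block: out, headOffset: headOffset }
}
```

The trade-off mentioned above is visible here: the partially consumed head buffer stays in the queue until it is fully copied out, so slightly more data is kept alive than strictly necessary.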
Thanks! I will do a release tomorrow.
@Beanow did you see perf improvements?
@diasdavid night and day difference. As per ipfs-inactive/js-ipfs-unixfs-engine#187 |