Use sendmmsg to reduce system call overhead #491
Conversation
Looks like macOS doesn't expose this; there's …
*force-pushed from d78d435 to 3734956*
Using the fallback for now, since …
*force-pushed from de70d6d to 8b9df23*
Investigation suggests that the remaining …
Mostly looking good, some questions.
*force-pushed from 623c73d to ae3b0ea*
I noticed `sendmsg` was 35% of the flame graph for the large streams benchmark, so I finally had a go at this. It yields a roughly 15-30% improvement in each of the throughput benchmarks! Oddly, `sendmmsg` is still about 35% of the flame graph, and cranking the batch size way up doesn't seem to help. We'll probably need to dive into libc or the kernel, or investigate generic segmentation offload or similar, to make further headway on this particular bottleneck.