benchmark: use default buffer sizes #5762
Conversation
The test failure seems unrelated to the change. I'm running the benchmarks via
9b8beba to 7e66658 (compare)
@dfawley: for a second set of eyes.
I ran the benchmarks with the default values (resp/req sizes of 0, 1KB, and 1MB; latency of 0ms and 40ms; max concurrency of 0, 8, 64, and 512; bandwidth of 0 and 10Mbps). It's a lot of benchmarks: they take 5h to run on an m5.2xl AWS instance, and the results are difficult to analyse, at least with the tools I currently have (I'm just diffing stdout). Overall the results are as expected:
I don't think it's worth spending much more time looking at the benchmark results as part of this change. I plan to add a test dimension for the buffer sizes (#5757) to look into this further. The results are here:
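As a rough sketch of what such a buffer-size dimension might look like (hypothetical; the actual design in #5757 may differ, and everything here except the grpc buffer-size options is invented for illustration):

```go
// Hypothetical sketch of sweeping buffer sizes as a benchmark dimension.
// The bufferSizes slice and the option plumbing are invented for
// illustration; only the grpc.*BufferSize options are real grpc-go APIs.
package main

import (
	"fmt"

	"google.golang.org/grpc"
)

func main() {
	// 0 means "use the transport default" (32KB); the other values override it.
	bufferSizes := []int{0, 32 * 1024, 128 * 1024, 1024 * 1024}

	for _, size := range bufferSizes {
		var sopts []grpc.ServerOption
		var dopts []grpc.DialOption
		if size > 0 {
			sopts = append(sopts, grpc.ReadBufferSize(size), grpc.WriteBufferSize(size))
			dopts = append(dopts, grpc.WithReadBufferSize(size), grpc.WithWriteBufferSize(size))
		}
		// A runScenario(sopts, dopts) call would start a server/client pair
		// and drive the workload for this dimension; elided here.
		fmt.Printf("buffer=%d: %d server opts, %d dial opts\n", size, len(sopts), len(dopts))
	}
}
```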
The read and write buffer sizes are currently hardcoded to 128KB instead of the default 32KB. This can potentially lead to misleading benchmark results in the most common case, since most users presumably use the default value.
This change removes the custom value for read and write transport buffer sizes, both client and server side, so that benchmarks use the default values instead.
RELEASE NOTES: none
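For reference, a minimal sketch of what the setup looks like before and after the change, assuming standard grpc-go options (the grpc.*BufferSize option names are real grpc-go APIs; the surrounding code is illustrative only, not the actual benchmark harness):

```go
// Sketch: how the benchmark's client/server options change when the
// hardcoded 128KB buffer sizes are dropped.
package main

import (
	"fmt"

	"google.golang.org/grpc"
)

func main() {
	// Before this change (roughly): both transports were forced to 128KB buffers.
	_ = []grpc.ServerOption{
		grpc.ReadBufferSize(128 * 1024),
		grpc.WriteBufferSize(128 * 1024),
	}
	_ = []grpc.DialOption{
		grpc.WithReadBufferSize(128 * 1024),
		grpc.WithWriteBufferSize(128 * 1024),
	}

	// After this change: no buffer-size options are passed at all, so the
	// HTTP/2 transport uses its defaults (32KB read and write buffers).
	sopts := []grpc.ServerOption{}
	dopts := []grpc.DialOption{}

	fmt.Printf("server opts: %d, dial opts: %d\n", len(sopts), len(dopts))
}
```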