# High CPU Usage (100%) in Specific Scenarios #781
Hm, good point, it seems that if a call to …
niceskylei pushed a commit to niceskylei/h2 that referenced this issue on Jun 3, 2024
seanmonstar pushed a commit that referenced this issue on Jun 3, 2024:
Some operating systems will allow you to continually call `write()` on a closed socket, and will return `Ok(0)` instead of an error. This patch checks for a zero write and, instead of looping forever trying to write, returns a proper error. Closes #781. Co-authored-by: leibeiyi <[email protected]>
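For illustration, here is a minimal sketch of that kind of zero-write check; the function name, buffer handling, and error message are stand-ins, not h2's actual code:

```rust
use std::io::{self, Write};

/// Flush a buffered frame to `dst`, treating a zero-length write as an
/// error instead of silently retrying. Some operating systems report
/// `Ok(0)` on a closed socket rather than returning an error.
fn write_frame<W: Write>(dst: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match dst.write(buf) {
            // A zero write means no progress is possible; without this
            // check the loop would spin forever at 100% CPU.
            Ok(0) => {
                return Err(io::Error::new(
                    io::ErrorKind::WriteZero,
                    "failed to write frame to socket",
                ));
            }
            Ok(n) => buf = &buf[n..],
            Err(ref e) if e.kind() == io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```

Returning `WriteZero` lets the connection task surface a real error instead of spinning on a write loop that can never make progress.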
cxw620 pushed a commit to cxw620/h2 that referenced this issue on Jan 20, 2025:
* v0.3.26
* Rename project to `rh2`
* Refactor frame sending custom implementation
* Export frame `PseudoOrder` settings
* Reduce unnecessary Option packaging
* v0.3.27
* fix(frame/headers): Fix error when headers priority is empty
* v0.3.29
* feat(frame/headers): Packaging headers pseudo order type (hyperium#8)
* feat(frame/settings): Packaging settings type (hyperium#9)
* Initialize frame settings order in advance
* v0.3.31
* feat(frame): Add unknown_setting frame settings (hyperium#10)
* Add unknown_setting patch
* Customize all Http Settings order
* v0.3.40
* fix(frame): Fix unknown setting encode (hyperium#11)
* v0.3.41
* feat: Replace with static settings (hyperium#12)
* v0.3.50
* feat: Destructive update, fixed-length array records the setting frame order (hyperium#13)
* v0.3.60
* Update README.md
* Sync upstream (hyperium#14):
  * fix: streams awaiting capacity lockout (hyperium#730) (hyperium#734). Changes the assign-capacity queue to prioritize streams that are send-ready. This is necessary to prevent a lockout when streams aren't able to proceed while waiting for connection capacity, but there is none. Closes hyperium/hyper#3338.
  * v0.3.23
  * streams: limit error resets for misbehaving connections. This change causes GOAWAYs to be issued to misbehaving connections which for one reason or another cause us to emit lots of error resets. Error resets are not generally expected from valid implementations anyway. The threshold after which we issue GOAWAYs is tunable and defaults to 1024.
  * Prepare v0.3.24
  * perf: optimize header list size calculations (hyperium#750). Speeds up loading blocks in cases where we already have many headers.
  * v0.3.25
  * refactor: cleanup new unused warnings (hyperium#757)
  * fix: limit number of CONTINUATION frames allowed. The allowed number of CONTINUATION frames is calculated from other settings as `max_header_list_size / max_frame_size`, which is roughly how many CONTINUATION frames would be needed to send headers up to the maximum allowed size, then multiplied by a small factor to allow for implementations that don't pack perfectly into the minimum number of frames needed. In practice, much more than that would be a very inefficient peer, or a peer trying to waste resources. See https://seanmonstar.com/blog/hyper-http2-continuation-flood/ for more info.
  * v0.3.26
  * fix: return a WriteZero error if frames cannot be written (hyperium#783). Some operating systems will allow you to continually call `write()` on a closed socket, and will return `Ok(0)` instead of an error. This patch checks for a zero write and, instead of looping forever trying to write, returns a proper error. Closes hyperium#781.
  * lints: fix unexpected cfgs warnings
  * ci: pin deps for MSRV
  * ci: pin more deps for MSRV job (hyperium#817)
  * fix: notify_recv after send_reset() in reset_on_recv_stream_err() to ensure the local stream is released properly (hyperium#816). Similar to what has been done in `fn send_reset<B>()`, we should notify the parked RecvStream after `send_reset()`.
* v0.3.61

Co-authored-by: Sean McArthur <[email protected]>
Co-authored-by: dswij <[email protected]>
Co-authored-by: Noah Kennedy <[email protected]>
Co-authored-by: beiyi lei <[email protected]>
Co-authored-by: leibeiyi <[email protected]>
Co-authored-by: Jiahao Liang <[email protected]>
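As a rough illustration of the CONTINUATION limit arithmetic mentioned above (`max_header_list_size / max_frame_size`, times a small slack factor), here is a sketch; the function name and the factor of 8 are assumptions for illustration, not h2's actual constants:

```rust
/// Rough illustration: how many CONTINUATION frames a peer could
/// legitimately need to send a header block capped at
/// `max_header_list_size` bytes using frames of `max_frame_size` bytes.
/// The slack factor of 8 is an assumed value, not h2's real constant.
fn continuation_frame_limit(max_header_list_size: u32, max_frame_size: u32) -> usize {
    // Minimum number of frames needed for a maximally large header block.
    let needed = max_header_list_size / max_frame_size.max(1);
    // Multiply by a small factor to tolerate peers that do not pack
    // header blocks perfectly into the minimum number of frames.
    (needed as usize).saturating_mul(8).max(1)
}
```

With, say, a 64 KiB header list cap and 16 KiB frames, that would allow 4 × 8 = 32 CONTINUATION frames before the peer is considered abusive.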
Description:
I'm experiencing an issue with the h2 crate where the CPU usage reaches 100% under specific conditions. This issue is consistently reproducible and significantly affects the performance of my application.
Steps to Reproduce:
Expected Behavior:
The CPU usage should remain within normal operating limits and not spike to 100% when the remote client disconnects.
Actual Behavior:
When the remote client disconnects, the CPU usage spikes to 100% and remains there, causing severe performance degradation.
Environment:
Additional Context:
CPU usage analysed with Instruments:
![Instruments CPU profile](https://private-user-images.githubusercontent.com/1320897/335419098-4b5e9cd4-5171-4205-8b29-ffe659727bae.png)
Log Files/Output:
With trace-level logging enabled, the following line is printed repeatedly:
`queued_data_frame=false`
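That trace is consistent with a flush loop that never makes progress. A minimal sketch of the failure mode (illustrative only, not h2's actual write path):

```rust
use std::io::Write;

/// Buggy flush loop: treats `Ok(0)` as ordinary progress. On platforms
/// where a closed socket yields `Ok(0)` instead of an error, `buf` never
/// shrinks and the loop spins forever, pinning a core at 100%.
fn flush_forever<W: Write>(dst: &mut W, mut buf: &[u8]) {
    while !buf.is_empty() {
        match dst.write(buf) {
            Ok(n) => buf = &buf[n..], // n == 0 means no progress: infinite loop
            Err(_) => return,
        }
    }
}
```

Once the remote client disconnects, `write` can keep returning `Ok(0)` on some platforms, so the buffer never shrinks; the merged fix (#783) turns that zero-length write into a `WriteZero` error instead.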
How I fixed it:
I have applied a preliminary fix that seems to mitigate the issue; the high CPU usage has not recurred in my initial tests. However, the fix has not been fully validated, and I am unsure whether it has other side effects.