drivers,server: Cap data read for a seg #1827
Conversation
After the update to start returning an error if the byte limit is exceeded while reading a request body, let's make sure to test for that error. I recommend small unit test cases as those should make it easy to construct request bodies that exceed the byte limit.
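For context, here is a minimal sketch of the kind of small unit test suggested above, assuming a hypothetical `readSegment` helper and a tiny limit so an oversized body is easy to construct; none of these names are the actual go-livepeer code.

```go
package segcap

import (
	"bytes"
	"io"
	"testing"
)

// Hypothetical limit, kept tiny so test bodies are trivial to build.
const maxSegSize = 10 // bytes

// readSegment is a stand-in for the request-body reading logic under test.
// It reads at most maxSegSize bytes and errors if the body is larger.
func readSegment(r io.Reader) ([]byte, error) {
	data, err := io.ReadAll(io.LimitReader(r, maxSegSize+1))
	if err != nil {
		return nil, err
	}
	if int64(len(data)) > maxSegSize {
		// Stand-in for a "segment too large" error.
		return nil, io.ErrUnexpectedEOF
	}
	return data, nil
}

func TestReadSegment_ExceedsLimit(t *testing.T) {
	body := bytes.NewReader(make([]byte, maxSegSize+1))
	if _, err := readSegment(body); err == nil {
		t.Error("expected an error when the body exceeds the byte limit")
	}
}

func TestReadSegment_WithinLimit(t *testing.T) {
	body := bytes.NewReader(make([]byte, maxSegSize))
	if _, err := readSegment(body); err != nil {
		t.Errorf("unexpected error for a body within the limit: %v", err)
	}
}
```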
@yondonfu @jailuthra @darkdarkdragon I feel medium-strongly this should be a configurable parameter with a default value, perhaps. It just feels like the kind of thing I'd want to be able to adjust in a production setting to unbreak an integration that's ever-so-slightly over the limit.
@iameli we are also using this size on the O side. If we make it configurable then we should put the value in the discovery protocol so O can clearly advertise its limits; otherwise B can get unexpected errors.
Sure, that's reasonable.
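As a rough illustration of what a configurable cap could look like, here is a sketch using a CLI flag with a default; the flag name and wiring are hypothetical and not part of this PR.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Default mirrors the ~300MB cap described in the PR. If this ever
	// becomes configurable, the chosen value would also need to be
	// advertised to broadcasters (e.g. via the discovery protocol) so
	// B and O agree on the limit.
	maxSegSize := flag.Int64("maxSegSize", 300*1024*1024, "maximum accepted segment size in bytes")
	flag.Parse()
	fmt.Printf("accepting segments up to %d bytes\n", *maxSegSize)
}
```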
Let's add some small tests for the new limiting behavior?
Force-pushed 979451e to e729d49
Latest changes look good to me pending the tests for the byte-read limiting behavior.
Force-pushed e729d49 to 3aebd37
Force-pushed 3aebd37 to 397380f
Added a couple of tests in 9d23156, rebased the PR, and added a changelog entry. Also created a tech debt issue for making the max segment size configurable.
LGTM
Addressed the review comments, and got another approval too.
What does this pull request do? Explain your changes. (required)
Cap the data read for a segment to prevent bad segments from eating up a lot of memory.
Specific updates (required)
see the commit history
Currently the max segment size is set to ~300MB (at 8000 kbps with the current max duration of 5 minutes).
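For illustration, a minimal sketch of one way such a cap could be enforced in an HTTP handler, using the standard library's `http.MaxBytesReader`; the handler name, route, and error response are assumptions, not the actual drivers/server code from this PR.

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// ~300MB: 8000 kbps * 300 s / 8 bits per byte.
const maxSegSize = 8000 * 1000 / 8 * 300

func handleSegment(w http.ResponseWriter, r *http.Request) {
	// MaxBytesReader makes the subsequent read fail once the body grows
	// past maxSegSize, instead of buffering the whole segment in memory.
	r.Body = http.MaxBytesReader(w, r.Body, maxSegSize)
	data, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "segment too large", http.StatusRequestEntityTooLarge)
		return
	}
	_ = data // hand off to transcoding in a real server
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/segment", handleSegment)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```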
How did you test each of these updates (required)
With Limit Set:
B:
I0404 22:11:10.044429 2542338 broadcast.go:372] Processing segment nonce=17696729379380681746 manifestID=test seqNo=1 dur=2 bytes=2097152
OT:
[h264 @ 0x7f810403d640] error while decoding MB 60 59, bytestream -12
Transcoded Rendition Duration: 1.23 s
Without Limit Set:
B:
I0404 22:12:42.698367 2543964 broadcast.go:372] Processing segment nonce=13032173926343571666 manifestID=test seqNo=1 dur=2 bytes=2405084
OT: No error
Transcoded Rendition Duration: 2.7 s
Does this pull request close any open issues?
Fixes #1523
Checklist:
- make runs successfully
- All tests in ./test.sh pass