fix (#315): correctly increment this.cer #317
```diff
@@ -63,6 +63,25 @@ test('client calls a https server twice', (t) => {
   })
 })
 
+test('client calculates correct duration when using pipelining', (t) => {
+  t.plan(4)
+
+  const lazyServer = helper.startServer({ delayResponse: 500 })
+  const opts = lazyServer.address()
+  opts.pipelining = 2
+  const client = new Client(opts)
+  let count = 0
+
+  client.on('response', (statusCode, length, duration) => {
+    t.equal(statusCode, 200, 'status code matches')
+    t.ok(duration > 500 && duration < 600)
+
+    if (++count === 2) {
+      client.destroy()
+    }
+  })
+})
+
 test('client calls a tls server without SNI servername twice', (t) => {
   t.plan(4)
```

Review thread on `t.ok(duration > 500 && duration < 600)`:

> **Reviewer:** This is probably going to make it a brittle test at some point. I understand that it's tricky to test, but relying on timing in tests is something I would try to avoid. I am not sure how to do it, though, apart from changing the way the test is written altogether.
>
> **Author:** We're doing such time-based assertions in a lot of places in these tests, so I don't think this is any less appropriate than that. But if you have any clever idea on how to improve it, I'd be happy to hear it.
>
> **Reviewer:** Not really. I think the only improvement to reduce possible brittleness is to only assert that it will take longer than 500, because `setTimeout` is guaranteed to trigger after, and not before, the specified time.
>
> **Author:** Our calculations were giving ~half results, so not checking whether it takes less than 600 ms might not catch it if the calculations break in the future and start giving 2x the time. We can increase it to 700 if builds start to fail.
>
> **Reviewer:** Fair enough!
```diff
@@ -891,17 +910,17 @@ test('client invokes appropriate onResponse when using pipelining', (t) => {
       t.deepEqual(responses, ['POST'])
       break
     case 2:
-      // 3rd was sent as 1st is finished, receiving 2st
+      // 3rd was sent as 1st is finished, receiving 2nd
       t.same(client.getRequestBuffer().toString(), makeResponseFromBody({ server, method: 'PUT' }), 'current should be third request')
       t.deepEqual(responses, ['POST', 'GET'])
       break
     case 3:
-      // 1st was resent, receiving 3st
+      // 1st was resent, receiving 3rd
       t.same(client.getRequestBuffer().toString(), makeResponseFromBody({ server, method: 'POST' }), 'current should be first request')
       t.deepEqual(responses, ['POST', 'GET', 'PUT'])
       break
     case 4:
-      // 2st was resent, receiving 1st
+      // 2nd was resent, receiving 1st
       t.same(client.getRequestBuffer().toString(), makeResponseFromBody({ server, method: 'GET' }), 'current should be second request')
       t.deepEqual(responses, ['POST', 'GET', 'PUT', 'POST'])
       client.destroy()
```
> **Author:**
>
> - `rpi` = request pipelining index
> - `cer` = current expected response
>
> Separated `rpi` and `cer` because all pipelining requests are fired at once, without waiting for a response, and the following happens (with pipelining = 2):
>
> - `cer` is 0
> - `_doRequest` is called with an incremented `cer`; `cer` is now 1
> - `_doRequest` tries to fire the same request again, hence re-setting the startTime to now, which messes up the `duration` calculation when the response comes
>
> Separating `rpi` ensures that after getting the response for the first request, we fire the first one again, because the second is already running right now. Does it make sense? 😅
> **Reviewer:** Totally, put it in a comment ;).
> **Another reviewer:** All of this code was written at a time when I understood function character length had a direct impact on whether it got optimised. Newer versions of V8 are not limited by this, AFAIK. If I were refactoring this code now, I would recommend rewriting the variables with their full names, and adding many, many more comments:
>
> - `rpi` = `requestPipeliningIndex`
> - `cer` = `currentExpectedResponse`
>
> After looking back at this code for the first time in years, I was struggling to understand the variable meanings myself. I would even begin to rename many functions (even anonymous ones) to give the code more contextual meaning. I'd aim to make it readable/understandable first, then measure the impact. My biggest mistake writing the code in autocannon was not doing that during the initial implementation, so I'm advising you to learn from that 😅
>
> I am happy to refactor this in a follow-up PR, if you'd like.
> **Reviewer:** @salmanm your explanation above makes sense to me, I just want to make sure there isn't an implicit assumption in this statement:
>
> This is not assuming that requests will take similar times to execute, right? In other words, this supports the case whereby, with pipelining 2: