[V1][Frontend] Coalesce bunched RequestOutputs #12298
Conversation
Under high load it's possible for the frontend per-request asyncio queues to back up, with the next token(s) arriving before the existing ones have been streamed back to the user. In this case there's no reason for them to be emitted as separate outputs in subsequent iterations. Concatenating them into a single output reduces the number of tasks / context switches / response messages to handle and means those additional "ready" tokens should reach the user faster.

Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
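For illustration only, here is a minimal sketch of the coalescing idea, assuming a per-request asyncio.Queue of delta-style outputs; `DeltaOutput` and `coalesce` below are illustrative stand-ins, not the actual vLLM RequestOutput API:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class DeltaOutput:
    """Stand-in for a delta-mode RequestOutput (illustrative only)."""
    request_id: str
    new_token_ids: list[int] = field(default_factory=list)
    finished: bool = False


def coalesce(first: DeltaOutput, nxt: DeltaOutput) -> DeltaOutput:
    """Fold a later delta into an earlier one so both reach the caller at once."""
    first.new_token_ids.extend(nxt.new_token_ids)
    first.finished = nxt.finished
    return first


async def stream(q: "asyncio.Queue[DeltaOutput]"):
    """Yield outputs, draining anything that has already queued up behind the first."""
    while True:
        out = await q.get()
        # Coalesce any additional queued outputs instead of yielding them
        # one-by-one on later iterations.
        while not q.empty():
            out = coalesce(out, q.get_nowait())
        yield out
        if out.finished:
            break
```

The key point is that anything already sitting in the queue is folded into the output being yielded, so later iterations (and the consumer) don't have to handle it separately.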
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:

🚀
This pull request has merge conflicts that must be resolved before it can be merged.
# Conflicts:
#   vllm/v1/engine/async_llm.py
@njhill What's the status of this PR? Would it be possible to merge it ASAP? Also, could you provide a performance benchmark? In which cases would this PR help most?
@WoosukKwon it's ready, but I'm looking into an unexpected CI test failure (it's unclear why this change would result in a CUDA OOM). I will also run some more benchmarks.
njhill force-pushed from ddd304d to 3ff92d0
Tests should now be fixed.
# Coalesce any additional queued outputs
while not q.empty():
    next_out = q.get_nowait()
    if sampling_params.output_kind == RequestOutputKind.DELTA:
Can be in a future PR, but we should check for invalid sampling params like n > 1 here.
Yes, I guess that's a general V1 thing; we should be explicitly rejecting requests that include unsupported parameters, if we aren't already.
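As a rough sketch of the kind of up-front validation being discussed (the function name and the exact set of rejected parameters are assumptions, not vLLM's actual API):

```python
def validate_supported_params(sampling_params) -> None:
    """Illustrative guard: reject sampling params this frontend path can't honor yet."""
    # n > 1 is used here as the example from the review discussion; the real
    # list of unsupported parameters would need to be confirmed against V1.
    if getattr(sampling_params, "n", 1) > 1:
        raise ValueError("n > 1 is not currently supported by this frontend path")
```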
Benchmarking shows a small benefit to latencies, TTFT in particular.
On an A100: [before / after benchmark results]