
[V1][Frontend] Coalesce bunched RequestOutputs #12298

Merged
merged 6 commits into vllm-project:main from coalesce-stream
Jan 24, 2025

Conversation

njhill
Member

@njhill njhill commented Jan 22, 2025

Under high load it's possible for the frontend per-request asyncio queues to back up, with the next token(s) arriving before existing ones are streamed back to the user.

In this case there's no reason to emit them as separate outputs in subsequent iterations. Concatenating them into a single output reduces the number of tasks / context switches / response messages to be handled, and means those additional "ready" tokens should reach the user faster.
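
Roughly, the idea looks like the minimal sketch below. This is not the actual async_llm.py code: StreamOutput, merge(), and stream_request() are simplified stand-ins for how consecutive delta RequestOutputs can be folded together before being yielded to the caller.

import asyncio
from dataclasses import dataclass, field
from typing import AsyncIterator

@dataclass
class StreamOutput:
    # Simplified stand-in for a delta-mode RequestOutput: it carries only
    # the token ids generated since the previous output for this request.
    new_token_ids: list[int] = field(default_factory=list)
    finished: bool = False

    def merge(self, other: "StreamOutput") -> None:
        # Fold a later delta into this one so both reach the client
        # in a single response message.
        self.new_token_ids.extend(other.new_token_ids)
        self.finished = other.finished

async def stream_request(q: asyncio.Queue) -> AsyncIterator[StreamOutput]:
    # Per-request frontend loop: wait for the next output, then drain
    # anything else that queued up while we were busy streaming.
    while True:
        out: StreamOutput = q.get_nowait() if not q.empty() else await q.get()
        while not q.empty():
            out.merge(q.get_nowait())
        yield out
        if out.finished:
            return

The effect is that when the event loop falls behind, the client sees one larger delta instead of several back-to-back ones, with correspondingly fewer asyncio task wakeups per request.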

Benchmarking shows a small benefit to latencies, TTFT in particular.

On an A100:

VLLM_USE_V1=1 vllm serve meta-llama/Llama-3.2-1B-Instruct --disable-log-requests --port 8001 --max-num-batched-tokens 2048 --no-enable-prefix-caching --uvicorn-log-level=error 
    python3 benchmark_serving.py \
        --model meta-llama/Llama-3.2-1B-Instruct \
        --dataset-name sharegpt \
        --dataset-path /workspace/ShareGPT_V3_unfiltered_cleaned_split.json \
        --port 8001 \
        --ignore-eos \
        --num-prompts 6000 \
        --request-rate 60 \
        --backend vllm

Before

============ Serving Benchmark Result ============
Successful requests:                     6000      
Benchmark duration (s):                  107.69    
Total input tokens:                      1322102   
Total generated tokens:                  1205048   
Request throughput (req/s):              55.72     
Output token throughput (tok/s):         11190.46  
Total Token throughput (tok/s):          23467.91  
---------------Time to First Token----------------
Mean TTFT (ms):                          56.86     
Median TTFT (ms):                        47.85     
P99 TTFT (ms):                           128.49    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          10.03     
Median TPOT (ms):                        8.55      
P99 TPOT (ms):                           17.88     
---------------Inter-token Latency----------------
Mean ITL (ms):                           9.88      
Median ITL (ms):                         8.59      
P99 ITL (ms):                            24.48     
==================================================

After

============ Serving Benchmark Result ============
Successful requests:                     6000      
Benchmark duration (s):                  107.34    
Total input tokens:                      1322102   
Total generated tokens:                  1205048   
Request throughput (req/s):              55.90     
Output token throughput (tok/s):         11226.66  
Total Token throughput (tok/s):          23543.84  
---------------Time to First Token----------------
Mean TTFT (ms):                          51.97     
Median TTFT (ms):                        45.87     
P99 TTFT (ms):                           115.81    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          9.44      
Median TPOT (ms):                        8.41      
P99 TPOT (ms):                           15.95     
---------------Inter-token Latency----------------
Mean ITL (ms):                           9.52      
Median ITL (ms):                         8.28      
P99 ITL (ms):                            24.00     
==================================================


Signed-off-by: Nick Hill <[email protected]>

Co-authored-by: Robert Shaw <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@njhill njhill added the ready ONLY add when PR is ready to merge/full CI is needed label Jan 22, 2025

mergify bot commented Jan 22, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @njhill.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jan 22, 2025
# Conflicts:
#	vllm/v1/engine/async_llm.py
@mergify mergify bot removed the needs-rebase label Jan 22, 2025
@WoosukKwon
Collaborator

@njhill What's the status of this PR? Would it be possible to merge this PR asap?

Also, could you provide a performance benchmark? In which cases would this PR help most?

@njhill
Member Author

njhill commented Jan 23, 2025

@WoosukKwon it's ready, but I am looking into an unexpected CI test failure (it's unclear why this change would result in a CUDA OOM).

I will also run some more benchmarks.

Signed-off-by: Nick Hill <[email protected]>
@njhill
Member Author

njhill commented Jan 23, 2025

Tests should now be fixed.

@njhill njhill mentioned this pull request Jan 23, 2025
# Coalesce any additional queued outputs
while not q.empty():
    next_out = q.get_nowait()
    if sampling_params.output_kind == RequestOutputKind.DELTA:
Member

Can be in a future PR, but we should check for invalid sampling params like n > 1 here

Member Author

Yes, I guess that's a general V1 thing; we should be explicitly rejecting requests that include unsupported parameters if we aren't already.
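
For illustration only, the kind of up-front check being discussed might look like the hedged sketch below. validate_v1_sampling_params is a hypothetical helper; where V1 actually performs such validation is not shown in this thread.

from vllm import SamplingParams

def validate_v1_sampling_params(params: SamplingParams) -> None:
    # Hypothetical guard: reject parameters the V1 streaming path does not
    # handle (the review comment gives n > 1 as an example) up front,
    # instead of failing later while coalescing delta outputs.
    if params.n > 1:
        raise ValueError("n > 1 is not supported by this streaming path")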

@mgoin mgoin added performance Performance-related issues frontend labels Jan 23, 2025
@njhill njhill merged commit 24b0205 into vllm-project:main Jan 24, 2025
46 checks passed
@njhill njhill deleted the coalesce-stream branch January 24, 2025 01:17
LucasWilkinson pushed a commit to neuralmagic/vllm that referenced this pull request Jan 24, 2025