
vLLM stops generation when pending requests > 0 #1734

Closed · NaCloudAI opened this issue on Nov 21, 2023 · 4 comments
Labels: stale (over 90 days of inactivity)

NaCloudAI commented Nov 21, 2023

It seems that under AWQ quantization, each incoming request temporarily halts vLLM for a few seconds: either the generation throughput drops to zero or the prompt throughput does, which indicates that the engine does not process prompts and generate output at the same time.

In one instance with 10 requests, each containing roughly 500 prompt tokens, the engine takes around 20 seconds to process the prompts before any generation starts. Even after all pending requests have been processed, a new request arriving during generation interrupts the ongoing generation to handle the new prompt.

INFO 11-21 00:12:08 llm_engine.py:624] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 122.2 tokens/s, Running: 20 reqs, Swapped: 0 reqs, Pending: 9 reqs, GPU KV cache usage: 24.8%, CPU KV cache usage: 0.0%
INFO 11-21 00:12:13 llm_engine.py:624] Avg prompt throughput: 1032.4 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 22 reqs, Swapped: 0 reqs, Pending: 7 reqs, GPU KV cache usage: 27.1%, CPU KV cache usage: 0.0%
INFO 11-21 00:12:18 llm_engine.py:624] Avg prompt throughput: 1096.2 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 24 reqs, Swapped: 0 reqs, Pending: 5 reqs, GPU KV cache usage: 29.4%, CPU KV cache usage: 0.0%

This pattern suggests that the scheduler prioritizes processing new prompts (prefill) over continuing ongoing generations (decode) whenever there are pending requests, and that prompt processing is slow under AWQ.

Is this expected?
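For reference, here is a minimal reproduction sketch (not part of the original report): it fires staggered streaming requests at a locally running vLLM OpenAI-compatible server and prints each request's time to first token, which makes the prefill stall visible. The server address, model name, prompt length, and stagger interval are placeholder assumptions.

```python
# Hypothetical reproduction sketch. Assumes an OpenAI-compatible vLLM server is
# already running locally (e.g. via vllm.entrypoints.openai.api_server) with an
# AWQ-quantized model loaded. Model name and URL below are placeholders.
import threading
import time

import requests

URL = "http://localhost:8000/v1/completions"   # assumed default server address
MODEL = "some-awq-model"                       # placeholder model name
PROMPT = "word " * 500                         # roughly 500 prompt tokens, as in the report


def fire(idx: int, results: dict) -> None:
    """Send one streaming completion request and record its time to first token."""
    payload = {
        "model": MODEL,
        "prompt": PROMPT,
        "max_tokens": 256,
        "stream": True,
    }
    start = time.time()
    first_token_at = None
    with requests.post(URL, json=payload, stream=True, timeout=600) as resp:
        for line in resp.iter_lines():
            # The server streams server-sent events; each data line is one chunk.
            if not line or not line.startswith(b"data: "):
                continue
            if line == b"data: [DONE]":
                break
            if first_token_at is None:
                first_token_at = time.time()
    results[idx] = (first_token_at or time.time()) - start


results: dict = {}
threads = [threading.Thread(target=fire, args=(i, results)) for i in range(10)]
for t in threads:
    t.start()
    time.sleep(1.0)  # stagger arrivals so later prompts land mid-generation
for t in threads:
    t.join()

for idx in sorted(results):
    print(f"request {idx}: time to first token = {results[idx]:.1f}s")
```

If the scheduler really pauses decoding to run prefill for newly arrived requests, the later requests should show their arrival stalling token output for the earlier, already-running ones, matching the alternating 0 tokens/s pattern in the log above.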

NaCloudAI changed the title from "vlllm stops generation upon input" to "vlllm stops generation when pending request > 0" on Nov 21, 2023

jpeig commented Nov 22, 2023

@simon-mo Same bug as #1707.

thomasfloqs commented

Hello, is there any update on this?
It would be super useful for us!

github-actions bot commented Oct 30, 2024

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

The github-actions bot added the stale (over 90 days of inactivity) label on Oct 30, 2024.

github-actions bot commented Dec 1, 2024

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

The github-actions bot closed this as not planned (won't fix / can't repro / duplicate / stale) on Dec 1, 2024.