It seems that under AWQ, each new input temporarily halts vLLM for a few seconds: either the generation throughput drops to zero or the prompt throughput does, indicating that the system does not process prompts and generate output simultaneously.
In an instance with 10 requests, each containing approximately 500 prompt tokens, the system takes around 20 seconds to process the prompts before starting any generation. Even after all pending requests have been processed, if a new request arrives, it interrupts the ongoing generation to handle the new prompt.
This pattern suggests that when multiple requests are in flight, the scheduler prioritizes processing new prompts over continuing ongoing generations, and that prompt processing (prefill) under AWQ quantization is slow.
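The observed pattern can be illustrated with a toy simulation of a prefill-prioritizing scheduler. This is a hypothetical sketch, not vLLM's actual scheduler (which is far more complex); the function names, token rates, and step granularity are all illustrative assumptions:

```python
from collections import deque

# Assumed illustrative rates, not measured vLLM numbers.
PREFILL_TOKENS_PER_STEP = 250   # prompt tokens processed per scheduler step
DECODE_TOKENS_PER_STEP = 1      # generated tokens per running sequence per step

def simulate(requests, arrivals, steps=10):
    """Toy model: prefill work always preempts decoding.

    requests: {request_id: prompt_length_in_tokens}
    arrivals: {step_index: [request_ids arriving at that step]}
    Returns a per-step list of (prefill_tokens, decode_tokens) throughput.
    """
    waiting = deque()   # requests still being prefilled: [id, tokens_left]
    running = []        # requests that finished prefill and are decoding
    log = []
    for step in range(steps):
        for rid in arrivals.get(step, []):
            waiting.append([rid, requests[rid]])
        if waiting:
            # Prefill preempts decoding entirely: decode throughput is zero.
            req = waiting[0]
            done = min(req[1], PREFILL_TOKENS_PER_STEP)
            req[1] -= done
            if req[1] == 0:
                running.append(waiting.popleft()[0])
            log.append((done, 0))
        else:
            # No pending prefill: all running sequences decode one token.
            log.append((0, len(running) * DECODE_TOKENS_PER_STEP))
    return log
```

Running `simulate({0: 500, 1: 500}, {0: [0], 5: [1]})` shows the reported behavior: decode throughput is zero while request 0 prefills, generation starts, then drops back to zero at step 5 when request 1 arrives and is prefilled before decoding resumes.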
Is this expected?
NaCloudAI changed the title from "vlllm stops generation upon input" to "vlllm stops generation when pending request > 0" on Nov 21, 2023.
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!