API causes slowdown in batch request handling #1707
Comments
This is not correct. vLLM automatically batches in-flight requests. It is built for the use case of high concurrency of requests. This means that when you send multiple individual requests, the underlying engine running in the server performs the batching.
Further illustrated here; hope the explanation is helpful: #1636 (comment)
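For readers following along, here is a minimal sketch of the pattern described above: many individual requests sent concurrently, with batching left to the server-side engine. The endpoint URL, model name, and prompts are assumptions for illustration.

```python
import asyncio
import aiohttp

API_URL = "http://localhost:8000/v1/completions"  # assumed default vLLM OpenAI-compatible endpoint

async def complete(session: aiohttp.ClientSession, prompt: str) -> dict:
    # One plain completion request; no client-side batching is attempted.
    payload = {"model": "facebook/opt-125m", "prompt": prompt, "max_tokens": 64}
    async with session.post(API_URL, json=payload) as resp:
        resp.raise_for_status()
        return await resp.json()

async def main() -> None:
    prompts = [f"Question {i}: what does vLLM do?" for i in range(8)]
    async with aiohttp.ClientSession() as session:
        # All requests are fired concurrently; the engine batches them in flight.
        results = await asyncio.gather(*(complete(session, p) for p in prompts))
    for result in results:
        print(result["choices"][0]["text"])

asyncio.run(main())
```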
It is, thank you for the detailed answer.
Ah, one more thing: if you are observing sequential behavior, try the current main branch instead of the released version, or turn on the flag. This should be fixed as we work on #1677
@simon-mo I'm using asyncio.gather to approach the API (calling the acreate function), so AsyncLLMEngine should be able to handle the queries concurrently. However, I am still experiencing semi-sequential behavior, whereby requests get added to the queue sequentially with seconds of delay in between. I'll try out the main branch.
v0.2.2 was released last night. It should include the change. Please try it out and let us know!
I'm on the main branch (latest). I still notice 0.5 to 1 second between each request being added to the queue. Only after all requests have been added do they execute concurrently. Is this expected behavior?
INFO 11-21 01:07:22 async_llm_engine.py:370] Received request cmpl-78ae9b5f36b241c0b64131e838f2a85f etc... Because I have sent quite a large number of requests to the API, no requests are processed by vLLM for 14 seconds (no GPU load).
Did you turn on the flag I mentioned?
@jpeig I have the same problem: when sending, for example, 10 requests concurrently, vLLM waits around 10 seconds before starting to generate output for each request. If I send a new request in the middle of generation, all requests that are generating output stop until the new request is handled. @simon-mo I have also used --engine-use-ray; no change. (api_server with streaming)
Yes, that's the same behavior. I am using the OpenAI server. What about you? @jajj50386
Same issue here, I'm using the OpenAI API.
Version 0.2.2
That's pretty disappointing; I just spent a few hours rewriting my code to send the requests in parallel and there is no speedup. The output that I get when I start the server:
Speaking of disappointing, I also rewrote my application to support vLLM and concurrent requests, as opposed to using exllama + my own API (without vLLM). But @simon-mo is working on it.
Sorry about the issue; we are treating it with high priority. We are in the process of reproducing the bug across different kinds of settings. As posted before, our original online tests demonstrated full saturation with batching behavior. vLLM is designed for high throughput in both online and offline scenarios.
@simon-mo Thank you! I really like all other aspects of vLLM so far. If you need help reproducing it, I'm happy to help. I attached the versions of the packages in my Python env in case that helps. Some more output from the server:
When vLLM is running in API mode, I tried making concurrent streaming calls, but some of the requests sent concurrently would wait a considerable amount of time before receiving results. I wanted to achieve a batch-processing-like effect, where 4-8 concurrently received requests could be processed together without significant delays between them. What I did was batch the received API requests and then concurrently start batch-size AsyncLLMEngine inferences for each batch of data (a rough sketch of the coalescing pattern is below). Judging from the actual results, this approach does get replies back faster for all calls. However, I am not sure whether this approach actually helps with inference speed or whether it is better to use the native API call directly.
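An engine-agnostic sketch of that request-coalescing idea, assuming an asyncio service loop; run_inference is a hypothetical stand-in for whatever engine or API call is actually used, and the batch size and wait window are arbitrary.

```python
import asyncio

BATCH_SIZE = 8     # how many requests to coalesce per wave (illustrative)
MAX_WAIT_S = 0.05  # how long to wait for stragglers before dispatching (illustrative)

async def run_inference(prompt: str) -> str:
    """Hypothetical placeholder for the real engine/API call."""
    await asyncio.sleep(0.1)
    return f"completion for: {prompt}"

async def batcher(queue: asyncio.Queue) -> None:
    while True:
        # Wait for the first request, then give others a short window so that
        # their prefills are dispatched in the same wave.
        batch = [await queue.get()]
        deadline = asyncio.get_running_loop().time() + MAX_WAIT_S
        while len(batch) < BATCH_SIZE:
            remaining = deadline - asyncio.get_running_loop().time()
            if remaining <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), remaining))
            except asyncio.TimeoutError:
                break
        # Fire the whole wave concurrently and resolve each caller's future.
        results = await asyncio.gather(*(run_inference(p) for p, _ in batch))
        for (_, fut), result in zip(batch, results):
            fut.set_result(result)

async def submit(queue: asyncio.Queue, prompt: str) -> str:
    fut = asyncio.get_running_loop().create_future()
    await queue.put((prompt, fut))
    return await fut

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    asyncio.create_task(batcher(queue))
    answers = await asyncio.gather(*(submit(queue, f"prompt {i}") for i in range(10)))
    print(answers)

asyncio.run(main())
```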
Any idea how long it might take to fix this or if there is a chance we can fix it ourselves?
My conservative ETA is EOW (12/3). If you want to help look into it as well, the more help the better!
Generating multiple completions in parallel also only works efficiently if there are no other requests. With other requests, the completion time goes from ~10 seconds to ~120 seconds for n=30.
Exactly! When I force-add a new request by bypassing the API, I notice that it works efficiently as well. That's why I initially assumed the default approach was to batch prompts in a single request (which isn't supported). This insight may help resolve the issue.
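For comparison, this is roughly what the "bypass the API" path looks like when all prompts enter the engine together via the offline interface; the model name and prompts are placeholders.

```python
from vllm import LLM, SamplingParams

# Offline batched generation: every prompt is handed to the engine up front,
# so all prefills are scheduled together instead of trickling in over HTTP.
llm = LLM(model="facebook/opt-125m")  # placeholder model
params = SamplingParams(temperature=0.8, max_tokens=64)

prompts = [f"Write one sentence about topic {i}." for i in range(15)]
outputs = llm.generate(prompts, params)

for output in outputs:
    print(output.outputs[0].text)
```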
OK, I spent some time going down different rabbit holes. The conclusion is as follows: you are seeing undesirable performance because of vLLM's under-optimized support for AWQ models at the moment. I would recommend using the non-quantized version (and a smaller model if the size doesn't fit) for now: not only will you get better accuracy, you will also get better performance. AWQ still works for low-throughput use cases, delivering lower latency and memory savings. You should also see this warning in the output; what you are observing is the effect of it:
Currently, vLLM allows prompt processing ("prefill") to "skip the line" of decoding cycles, so that we can further saturate GPU utilization in later decoding stages by bringing in the new requests. However, due to the poor performance of AWQ, the prefill processing is very slow, which further slows things down and makes it look as if batching is not in effect and decoding is not happening in parallel. You can learn more about how vLLM's current approach compares in Microsoft DeepSpeed's post. See more detail here: #1032 (comment), quoting @WoosukKwon:
The root cause is that vLLM doesn't have well-tuned AWQ CUDA kernels for different shapes and hardware. We are planning to experiment with Triton compilation for better kernels. The original kernels we adapted from the AWQ repo are optimized for resource-constrained hardware like NVIDIA Jetson Orin. We will fix this, as quantization will be a vital part of the LLM inference stack. However, creating optimized kernels for different hardware configurations is non-trivial. To address this, I'm updating the docs in #1883. I'm also starting to see whether bringing in a newer version of the AWQ kernel will give higher performance (#1882). Lastly, the scheduling algorithm that lets prefill skip the line is not always the best approach, especially in the case of long prompts. We are working on getting a version of chunked prefill into vLLM as well. Finally, I want to thank you for your patience and support of vLLM as we work through performance issues and bugs.
Thank you for your response, but AWQ does not appear to be the issue. I tested 15 prompts without AWQ quantization, and I still get 0.5-1 second between handling each request. I can 'fix' the issue by not using the API and directly adding the requests, as @tom-doerr has said. With a batch of 15 prompts, I experience a slowdown of roughly 10 seconds because of this. So this is not an AWQ issue but an API / request-handling issue.
@yungangwu could you share the code?
Can you share the following so I can reproduce this? I have been working under the assumption of AWQ models vs. regular llama2-7b-chat as comparison points.
In the current state of the batching algorithm, in the absence of a bug, the 0.5-1 second might be the time it takes to perform the prefill for one request. This is roughly the time it takes to process 1000-2000 tokens, depending on your hardware. In more detail, the algorithm is (as mentioned before, pending improvements):
What you mentioned could be the following case:
You can test whether this is the case by checking (1) the average generation throughput in the log and (2) the time to first token for a single request (by setting max_output_tokens=1). The reason manually adding requests works is that they are guaranteed to be prefilled together instead of spread apart in time. When they are prefilled together, the latency of the entire batch adds only a small overhead compared to the latency of a single request. The final solution to this will be for vLLM to implement chunked prefill. But I think there might be a way to encourage a batch of prefill requests in the AsyncLLMEngine; let me see...
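A quick way to run the time-to-first-token check suggested in (2) against the HTTP server; the endpoint URL and model name are assumptions, and the OpenAI-style payload field that caps output length is max_tokens.

```python
import time
import requests

API_URL = "http://localhost:8000/v1/completions"  # assumed vLLM OpenAI-compatible endpoint

payload = {
    "model": "meta-llama/Llama-2-7b-chat-hf",  # substitute the model you serve
    "prompt": "Explain continuous batching in one sentence.",
    "max_tokens": 1,  # a single output token ~= prefill time plus one decode step
}

start = time.perf_counter()
response = requests.post(API_URL, json=payload)
response.raise_for_status()
elapsed = time.perf_counter() - start
print(f"Approximate time to first token: {elapsed:.2f}s")
```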
I have highlighted below the main problem that I see. When you stop decoding because a new request arrives, it can result in a slowdown, especially if you are working with large prompts. I think this is what Dynamic SplitFuse from MII was supposed to address: essentially splitting a large prompt into multiple pieces to process them faster.
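A toy illustration of that idea (not vLLM's or MII's actual scheduler): with chunked prefill, a long incoming prompt is absorbed a slice at a time, so sequences that are already decoding keep making progress instead of stalling for the whole prefill. The chunk size is arbitrary.

```python
PREFILL_CHUNK = 512  # prompt tokens absorbed per engine step (illustrative)

def next_step(new_prompt_tokens: list, running_sequences: list):
    """Describe what one engine step would work on under chunked prefill."""
    chunk = new_prompt_tokens[:PREFILL_CHUNK]
    remaining = new_prompt_tokens[PREFILL_CHUNK:]
    step = {
        "decode": running_sequences,  # every running request still decodes one token
        "prefill_chunk": chunk,       # plus one slice of the incoming prompt
    }
    return step, remaining
```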
Model: a lot of different models, mostly Mistral-based. Lately I have been using OpenHermes-2.5, both with and without AWQ.
Hardware: RTX 3090, AMD Ryzen CPU
Length of the prompt: around 1000 tokens
Length of decode (output length): around 3000 tokens
Blocks available: INFO 12-04 11:03:49 llm_engine.py:219] # GPU blocks: 3092, # CPU blocks: 2048
Request:
Payload:
I am using LM Format Enforcer to constrain the output to proper JSON. This should not affect the handling of the requests.
I can literally hear my GPU not doing anything for about 10 seconds as requests are first handled sequentially.
@jpeig, the LM format enforcer bit is a good hint. Given the low generation throughput, I suspect this performance bug, which they just fixed recently:
Could the format enforcer slow down all requests, or only the ones where a format is used?
Currently, format enforcer usage is per sequence (using vLLM's logits_processors API), so I believe you can turn it on and off depending on your workload.
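A minimal sketch of what "per sequence" means here, assuming the logits_processors field on SamplingParams that the format-enforcer integration plugs into; the model name is a placeholder and the processor below is a no-op standing in for a real grammar mask.

```python
from vllm import LLM, SamplingParams

def passthrough_processor(token_ids, logits):
    # A logits processor receives the token ids generated so far and the logits
    # for the next token, and returns (possibly modified) logits. A real format
    # enforcer would mask out tokens that break the target grammar here.
    return logits

llm = LLM(model="facebook/opt-125m")  # placeholder model

# The processor is attached per request via SamplingParams, so constrained and
# unconstrained requests can share the same engine and the same running batch.
constrained = SamplingParams(max_tokens=128, logits_processors=[passthrough_processor])
unconstrained = SamplingParams(max_tokens=128)

json_like = llm.generate(["Return a JSON object describing a cat."], constrained)
freeform = llm.generate(["Tell me something about cats."], unconstrained)
```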
Unrelated to the issue, but it would be great to get parser support over the API.
Hi, I am new to vLLM. I need to make batch calls in vLLM; does vLLM have native support for this? And if so, is it as good an approach as sending individual requests concurrently? Thanks in advance.
So, are there any updates on this issue?
Hi!
Same with Qwen2-7B-Instruct-AWQ on 1x RTX 2080 Ti 22G: offline mode is 3x faster than vllm.entrypoints.openai.api_server with the same config. Offline mode log:
api_server mode log:
+1
I have this same issue, but it is only on one of my servers. The other is fine. On the bad server it drops off like this periodically:
I did
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
+1
+1
We have made a lot of progress on the API server in the past year. Please open a new issue with more specifics if needed. |
Using the API server and submitting multiple prompts to take advantage of the speed benefit returns the following error:
"multiple prompts in a batch is not currently supported"
What's the point of vLLM without being able to send batches to the API?
Of course, I can send multiple separate requests, but those are handled sequentially and do not benefit from the speed improvements.
Correct me if I'm wrong...