
[Bugfix] Fix missing seq_start_loc in xformers prefill metadata #12464

Merged: 1 commit into vllm-project:main on Jan 27, 2025

Conversation

@Isotr0py (Collaborator) commented on Jan 27, 2025

Phi-3-small with blocksparse attention fails during prefill because seq_start_loc is missing from the xformers attention prefill metadata:

Logs
[rank0]: Traceback (most recent call last):
[rank0]:   File "/kaggle/working/vllm/vllm/worker/model_runner_base.py", line 116, in _wrapper
[rank0]:     return func(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/worker/model_runner.py", line 1718, in execute_model
[rank0]:     hidden_or_intermediate_states = model_executable(
[rank0]:                                     ^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/model_executor/models/phi3_small.py", line 445, in forward
[rank0]:     output_hidden_states = self.model(
[rank0]:                            ^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/model_executor/models/phi3_small.py", line 358, in forward
[rank0]:     hidden_states = layer(
[rank0]:                     ^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/model_executor/models/phi3_small.py", line 290, in forward
[rank0]:     hidden_states = self.self_attn(
[rank0]:                     ^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/model_executor/models/phi3_small.py", line 249, in forward
[rank0]:     attn_output = self.attn(q, k, v, kv_cache, attn_metadata=attn_metadata)
[rank0]:                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/attention/layer.py", line 177, in forward
[rank0]:     return torch.ops.vllm.unified_attention(
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/_ops.py", line 1116, in __call__
[rank0]:     return self._op(*args, **(kwargs or {}))
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/attention/layer.py", line 271, in unified_attention
[rank0]:     return self.impl.forward(self, query, key, value, kv_cache, attn_metadata)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/attention/backends/blocksparse_attn.py", line 424, in forward
[rank0]:     output = self.bs_attn(
[rank0]:              ^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank0]:     return self._call_impl(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
[rank0]:     return forward_call(*args, **kwargs)
[rank0]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/attention/ops/blocksparse_attention/interface.py", line 223, in forward
[rank0]:     return self.spda(
[rank0]:            ^^^^^^^^^^
[rank0]:   File "/kaggle/working/vllm/vllm/attention/ops/blocksparse_attention/interface.py", line 185, in spda
[rank0]:     cu_seqlens = cu_seqlens_k.cpu()
[rank0]:                  ^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'NoneType' object has no attribute 'cpu'
  • This PR fixes the missing seq_start_loc in xformers prefill metadata.
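
For context, seq_start_loc is the cumulative-offset ("cu_seqlens") tensor that varlen attention kernels use to locate each sequence's tokens in the packed prefill batch; when it is left unset, the blocksparse spda path dereferences None, as seen in the traceback above. The sketch below illustrates how such a tensor is typically built from per-sequence lengths. It is illustrative only, not the actual patch: the helper name build_seq_start_loc is made up for this example, while seq_lens and seq_start_loc mirror the field names used in vLLM's attention metadata.

```python
# Illustrative sketch (not the actual patch): how a seq_start_loc /
# cu_seqlens tensor is typically derived from per-sequence lengths.
import torch

def build_seq_start_loc(seq_lens: list[int], device: str = "cpu") -> torch.Tensor:
    """Prefix-sum of sequence lengths with a leading zero.

    For seq_lens=[3, 5, 2] this returns [0, 3, 8, 10]; entry i gives the
    offset of sequence i's first token in the packed (flattened) batch,
    and the last entry equals the total token count.
    """
    seq_lens_t = torch.tensor(seq_lens, dtype=torch.int32, device=device)
    seq_start_loc = torch.zeros(len(seq_lens) + 1, dtype=torch.int32, device=device)
    torch.cumsum(seq_lens_t, dim=0, dtype=torch.int32, out=seq_start_loc[1:])
    return seq_start_loc

# The blocksparse spda path in the traceback calls cu_seqlens_k.cpu();
# if the prefill metadata never populated seq_start_loc, that tensor is
# None and the AttributeError above is raised.
seq_start_loc = build_seq_start_loc([3, 5, 2])
print(seq_start_loc)  # tensor([ 0,  3,  8, 10], dtype=torch.int32)
```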


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@DarkLight1337 (Member) left a comment:

Thanks for fixing! cc @WoosukKwon

@DarkLight1337 enabled auto-merge (squash) on January 27, 2025 at 05:51
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Jan 27, 2025
@DarkLight1337 merged commit 372bf08 into vllm-project:main on Jan 27, 2025
64 checks passed
@Isotr0py deleted the fix-xformers branch on January 27, 2025 at 07:26
tjtanaa added a commit to EmbeddedLLM/vllm that referenced this pull request Jan 27, 2025
* [Misc] Use VisionArena Dataset for VLM Benchmarking (vllm-project#12389)

Signed-off-by: Roger Wang <[email protected]>

* [ci/build] fix wheel size check (vllm-project#12396)

Signed-off-by: youkaichao <[email protected]>

* [Hardware][Gaudi][Doc] Add missing step in setup instructions (vllm-project#12382)

* [ci/build] sync default value for wheel size (vllm-project#12398)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Enable proxy support in benchmark script (vllm-project#12356)

Signed-off-by: Junichi Sato <[email protected]>

* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build (vllm-project#12375)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Misc] Remove deprecated code (vllm-project#12383)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). (vllm-project#12405)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Bugfix][Kernel] Fix moe align block issue for mixtral (vllm-project#12413)

* [Bugfix] Fix BLIP-2 processing (vllm-project#12412)

Signed-off-by: DarkLight1337 <[email protected]>

* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 (vllm-project#12408)

Signed-off-by: Divakar Verma <[email protected]>

* [Misc] Add FA2 support to ViT MHA layer (vllm-project#12355)

Signed-off-by: Isotr0py <[email protected]>

* [TPU][CI] Update torchxla version in requirement-tpu.txt (vllm-project#12422)

Signed-off-by: Siyuan Liu <[email protected]>

* [Misc][Bugfix] FA3 support to ViT MHA layer (vllm-project#12435)

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (vllm-project#12094)

Signed-off-by: Keyun Tong <[email protected]>

* [V1][Bugfix] Fix assertion when mm hashing is turned off (vllm-project#12439)

Signed-off-by: Roger Wang <[email protected]>

* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 (vllm-project#12445)

* [Frontend] generation_config.json for maximum tokens (vllm-project#12242)

Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>

* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 (vllm-project#12417)

Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>

* [Bugfix/CI] Fix broken kernels/test_mha.py (vllm-project#12450)

* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 (vllm-project#12434)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Build/CI] Fix libcuda.so linkage (vllm-project#12424)

Signed-off-by: Tyler Michael Smith <[email protected]>

* [Frontend] Rerank API (Jina- and Cohere-compatible API)  (vllm-project#12376)

Signed-off-by: Kyle Mistele <[email protected]>

* [DOC] Add link to vLLM blog (vllm-project#12460)

Signed-off-by: Yuan Tang <[email protected]>

* [V1] Avoid list creation in input preparation (vllm-project#12457)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Frontend] Support scores endpoint in run_batch (vllm-project#12430)

Signed-off-by: Pooya Davoodi <[email protected]>

* [Bugfix] Fix Granite 3.0 MoE model loading (vllm-project#12446)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata (vllm-project#12464)

Signed-off-by: Isotr0py <[email protected]>

* [V1][Minor] Minor optimizations for update_from_output (vllm-project#12454)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Bugfix] Fix gpt2 GGUF inference (vllm-project#12467)

Signed-off-by: Isotr0py <[email protected]>

---------

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
tjtanaa pushed a commit to EmbeddedLLM/vllm that referenced this pull request Jan 28, 2025
rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025
Isotr0py added a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025