
[CPU] Remove the limitation that requires to memset zero for KVCache of PagedAttention #28681

Merged

Conversation

@luo-cheng2021 (Contributor) commented Jan 26, 2025

Details:

  • Remove the limitation that requires memsetting zero for the KVCache of PagedAttention
    • Performance impact estimation for zero padding:
      • only affects the first token
      • memset cost in the worst case for a llama2-7b-like model (head number=32, head size=128, precision=f16, block size=32, layer number=32, memory speed=50GB/s):
        1 (batch number) * 32 (head number) * 31 (tokens to pad) * 128 (head size) * 2 (f16 bytes) * 2 (K+V) * 32 (layer number) ≈ 16.25MB. The cost is then 16.25MB / (50GB/s) ≈ 0.3ms, which should be a small impact compared to the first-token cost, typically hundreds or thousands of milliseconds. (A worked sketch of this estimate follows the list below.)
  • Potential incorrect-result bug if the head size is not a multiple of 16
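
A minimal sketch of the worst-case estimate above, using only the llama2-7b-like numbers stated in the bullet; all values are the assumptions listed there, not measurements:

```cpp
#include <cstdio>

int main() {
    const double batch      = 1;     // batch number
    const double num_heads  = 32;    // head number
    const double pad_tokens = 31;    // worst case: block_size - 1 tokens to pad
    const double head_size  = 128;
    const double bytes_f16  = 2;     // bytes per f16 element
    const double kv_factor  = 2;     // both K and V caches
    const double num_layers = 32;
    const double mem_bw     = 50e9;  // assumed memory speed, bytes per second

    const double bytes = batch * num_heads * pad_tokens * head_size *
                         bytes_f16 * kv_factor * num_layers;   // ~16.25 MB
    const double ms    = bytes / mem_bw * 1e3;                 // ~0.3 ms

    std::printf("padded bytes: %.2f MB, estimated memset cost: %.2f ms\n",
                bytes / 1e6, ms);
    return 0;
}
```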

Tickets:

@@ -2356,6 +2356,28 @@ struct AttentionExecutor : public PagedAttentionExecutor {
_slot_mapping.ptr<int32_t>()[idx++] =
block_number * _helper._block_size + block_offset % _helper._block_size;
}
// To simplify tails of the kernels for Q*K and W*V:
Contributor commented:
This is a workaround (WA) for the first-token kernels, which cannot correctly handle tails (matmul(attn_score, value), to be exact). Why is this WA not placed near the kernel code?

@luo-cheng2021 (Contributor Author) replied:

It would be useful to merge the zero-padding logic into exec_loop_mixed/pack kv, but if it were merged, the parallel work items would drop from head number * zero-padded tokens to just head number. So keeping it here is still reasonable, since this is the centralized logic that handles the destination KVCache.
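
An illustrative sketch (not the PR's actual code) of the centralized zero-padding this reply refers to; the cache layout and all names (zero_pad_kv_tail, k_cache, v_cache, etc.) are assumptions:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Zero the tail slots of the last, partially filled block of the K/V caches so
// the first-token W*V kernel can treat every block as full. The cache layout is
// assumed to be [block_num, head_num, block_size, head_size] in f16.
void zero_pad_kv_tail(uint16_t* k_cache,
                      uint16_t* v_cache,
                      size_t last_block,   // index of the partially filled block
                      size_t filled,       // tokens already written in that block
                      size_t head_num,
                      size_t block_size,
                      size_t head_size) {
    const size_t tokens_to_pad = block_size - filled;
    // Work items are head_num * tokens_to_pad, the granularity mentioned above.
    for (size_t h = 0; h < head_num; ++h) {
        for (size_t t = 0; t < tokens_to_pad; ++t) {
            const size_t off =
                ((last_block * head_num + h) * block_size + filled + t) * head_size;
            std::memset(k_cache + off, 0, head_size * sizeof(uint16_t));
            std::memset(v_cache + off, 0, head_size * sizeof(uint16_t));
        }
    }
}
```

Per the guard shown in the diff (q_len != 1 && kv_len % _helper._block_size != 0), such padding would run only during first-token processing when the last block is partially filled.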

@luo-cheng2021 requested a review from usstq February 7, 2025 08:04
@usstq (Contributor) left a comment:

LGTM!

// W*V aka [m, k1] * [n1, k1]', there is no tails handling for n1, so tails of v_cache need to be set to
// zero.
// For the second token, the kernels have tails handling logic.
if (q_len != 1 && kv_len % _helper._block_size != 0) {
@dmitry-gorokhov (Contributor) commented Feb 11, 2025:
So in serving scenarios (where prompt processing is interleaved with second-token generation) and in beam-search or speculative-decoding cases (where even the second token is processed with q_len != 1), will we have memsets on each iteration?

@dmitry-gorokhov added this pull request to the merge queue Feb 11, 2025
Merged via the queue into openvinotoolkit:master with commit a6cdc76 Feb 11, 2025
181 checks passed
Labels
category: CPU OpenVINO CPU plugin
4 participants