Optimized vecmat ukernel tile functions for i16 x u4 -> i32 on AVX-512-VNNI #15525

Merged
merged 1 commit into iree-org:main from ukernel-s16u4-x86-vnni on Nov 10, 2023

Conversation

@bjacob bjacob (Contributor) commented Nov 10, 2023

This kernel is parametrized in N0, allowing N0==16 and N0==32. Performance on AMD Ryzen 9 7950X3D:

  • With N0=16: 180 Gop/s.
  • With N0=32: 240 Gop/s.

These numbers show that there's a nice reward for going extra large, but that's also a liability for vecmat shapes whose N dimension isn't a multiple of 32. Maybe we can keep both for now.

This is currently by far our fastest vecmat tile function: it's fast even by general-matmul standards, even though vecmat's low arithmetic intensity usually relegates it to lower performance levels. It shows what's possible now that we've decoupled vecmat tile shapes from general matmul tile shapes in #15431. That 32x8 is not a truncation of a general matmul tile shape. Other element types and CPU architectures all need to get the same treatment.

The idea of this kernel is to split the LHS s16 values into high and low 8-bit components to be able to use `_mm512_dpbusd_epi32`.

In itself, that doesn't reduce the number of arithmetic instructions: each `_mm512_dpbusd_epi32` computes a 4D dot-product instead of the 2D one in `_mm512_dpwssd_epi32`, but we need twice as many of them to handle the high and low 8-bit parts of the LHS s16 values separately, so the number of effective multiply-adds per instruction is unchanged.

The real benefit is that this removes the need to extend the RHS u4 values to s16. Since this is a vecmat kernel, the LHS is small and the RHS is big, so avoiding per-element work on the RHS is what matters.

Using `_mm512_dpbusd_epi32`, with its quirky unsigned * signed semantics, is not trivial. We take advantage of the fact that our u4 RHS values, when extended to u8, do not use the top bit, so they are also interpretable as s8 values in place. This is specific to the RHS holding less-than-8-bit values (beyond that, it is not specific to 4-bit). Meanwhile, when we split the LHS s16 values into high and low 8-bit components, the high 8 bits form a signed s8 value and the low 8 bits an unsigned u8 value. So, for each combination of operands that we have to feed to `_mm512_dpbusd_epi32`, we can find an operand order that accommodates the instruction's signedness requirements.
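For concreteness, here is a minimal scalar model of that decomposition and the two operand orders (illustrative only, not the ukernel source; the real kernel evaluates four such products per 32-bit lane with `_mm512_dpbusd_epi32`):

```c
#include <stdint.h>
#include <stdio.h>

/* Scalar model of the decomposition. An s16 value x is 256 * x_hi + x_lo,
   with x_hi signed (s8) and x_lo unsigned (u8): exactly the signedness
   combinations that vpdpbusd's (unsigned, signed) operand slots accept,
   given that a u4 value extended to u8 has a clear top bit and therefore
   also reads correctly as s8. */
static int32_t model_mul_s16_u4(int16_t lhs, uint8_t rhs_u4) {
  int8_t lhs_hi = (int8_t)(lhs >> 8);      /* high byte: signed s8 */
  uint8_t lhs_lo = (uint8_t)(lhs & 0xFF);  /* low byte: unsigned u8 */
  int32_t lo_part = (int32_t)lhs_lo * (int32_t)(int8_t)rhs_u4; /* u8 x s8 */
  int32_t hi_part = (int32_t)rhs_u4 * (int32_t)lhs_hi;         /* u8 x s8 */
  return lo_part + 256 * hi_part;
}

int main(void) {
  for (int32_t lhs = INT16_MIN; lhs <= INT16_MAX; ++lhs) {
    for (int32_t rhs = 0; rhs < 16; ++rhs) {
      if (model_mul_s16_u4((int16_t)lhs, (uint8_t)rhs) != lhs * rhs) {
        printf("mismatch at lhs=%d rhs=%d\n", (int)lhs, (int)rhs);
        return 1;
      }
    }
  }
  printf("identity holds for all s16 x u4 pairs\n");
  return 0;
}
```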

@bjacob bjacob requested a review from Max191 November 10, 2023 04:57
@bjacob bjacob marked this pull request as ready for review November 10, 2023 04:57
@bjacob bjacob requested a review from benvanik as a code owner November 10, 2023 04:57
@bjacob bjacob (Contributor, Author) commented Nov 10, 2023

objdump of the inner loop of the N0==32 kernel. The `vpdpbusd` are the useful arithmetic instructions, so we want the inner loop to spend as much of its time as possible in them.

   287f0:       c5 fa 6f 1c 0e          vmovdqu xmm3,XMMWORD PTR [rsi+rcx*1]
   287f5:       c4 c2 61 00 e5          vpshufb xmm4,xmm3,xmm13
   287fa:       c4 c2 61 00 ee          vpshufb xmm5,xmm3,xmm14
   287ff:       c4 c2 61 00 ff          vpshufb xmm7,xmm3,xmm15
   28804:       62 b2 65 08 00 d8       vpshufb xmm3,xmm3,xmm16
   2880a:       62 e1 fe 48 6f 0c ca    vmovdqu64 zmm17,ZMMWORD PTR [rdx+rcx*8]
   28811:       62 e1 fe 48 6f 54 ca    vmovdqu64 zmm18,ZMMWORD PTR [rdx+rcx*8+0x40]
   28818:       01 
   28819:       62 b1 65 40 71 d1 04    vpsrlw zmm19,zmm17,0x4
   28820:       62 b1 5d 40 71 d2 04    vpsrlw zmm20,zmm18,0x4
   28827:       62 f2 fd 48 59 e4       vpbroadcastq zmm4,xmm4
   2882d:       62 f2 fd 48 59 ed       vpbroadcastq zmm5,xmm5
   28833:       62 f2 fd 48 59 ff       vpbroadcastq zmm7,xmm7
   28839:       62 f2 fd 48 59 db       vpbroadcastq zmm3,xmm3
   2883f:       62 c1 75 40 db c9       vpandd zmm17,zmm17,zmm9
   28845:       62 b2 5d 48 50 c9       vpdpbusd zmm1,zmm4,zmm17
   2884b:       62 f2 75 40 50 d5       vpdpbusd zmm2,zmm17,zmm5
   28851:       62 c1 65 40 db c9       vpandd zmm17,zmm19,zmm9
   28857:       62 b2 45 48 50 f1       vpdpbusd zmm6,zmm7,zmm17
   2885d:       62 72 75 40 50 d3       vpdpbusd zmm10,zmm17,zmm3
   28863:       62 c1 6d 40 db c9       vpandd zmm17,zmm18,zmm9
   28869:       62 b2 5d 48 50 c1       vpdpbusd zmm0,zmm4,zmm17
   2886f:       62 72 75 40 50 dd       vpdpbusd zmm11,zmm17,zmm5
   28875:       62 d1 5d 40 db e1       vpandd zmm4,zmm20,zmm9
   2887b:       62 72 45 48 50 c4       vpdpbusd zmm8,zmm7,zmm4
   28881:       62 72 5d 48 50 e3       vpdpbusd zmm12,zmm4,zmm3
   28887:       48 83 c1 10             add    rcx,0x10
   2888b:       48 ff c8                dec    rax
   2888e:       0f 85 5c ff ff ff       jne    287f0 <iree_uk_mmt4d_tile_s16u4s32_1x32x8_x86_64_avx512_vnni+0x70>
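For readers who would rather see intrinsics than disassembly, here is a rough rendering of the recurring pattern, covering one 64-byte RHS load and the low nibbles only (a hedged sketch with illustrative names and signature, not the ukernel source, which also shuffles and broadcasts the LHS bytes and keeps 8 accumulators live):

```c
#include <immintrin.h>

/* Requires AVX-512BW and AVX-512VNNI (-mavx512bw -mavx512vnni). */
static inline void s16u4_dpbusd_step(__m512i *acc_lo, __m512i *acc_hi,
                                     __m512i lhs_lo_u8, /* low LHS bytes, broadcast */
                                     __m512i lhs_hi_s8, /* high LHS bytes, broadcast */
                                     const void *rhs_ptr) {
  const __m512i mask = _mm512_set1_epi8(0x0F);
  __m512i rhs = _mm512_loadu_si512(rhs_ptr);     /* 128 packed u4 values (vmovdqu64) */
  __m512i rhs_u8 = _mm512_and_si512(rhs, mask);  /* low nibbles to u8 (vpandd) */
  /* vpdpbusd takes (unsigned bytes, signed bytes); u4 data with its top bit
     clear is valid in either slot, making both pairings legal: */
  *acc_lo = _mm512_dpbusd_epi32(*acc_lo, lhs_lo_u8, rhs_u8); /* u8 LHS x s8 RHS */
  *acc_hi = _mm512_dpbusd_epi32(*acc_hi, rhs_u8, lhs_hi_s8); /* u8 RHS x s8 LHS */
}
```

The high nibbles take the same two pairings after the `vpsrlw`-by-4 and re-mask, and per the decomposition the final per-lane result is `acc_lo + 256 * acc_hi`; that is how the loop ends up with 8 `vpdpbusd` per iteration (2 RHS loads x 2 nibble sets x 2 LHS halves).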

@bjacob bjacob force-pushed the ukernel-s16u4-x86-vnni branch from 0185b15 to f7ad094 on November 10, 2023 14:47
@Max191 Max191 (Contributor) left a comment
This is pretty awesome! The high/low bit separation trick is a really cool optimization. Nice work 😄

@bjacob bjacob merged commit f458a88 into iree-org:main Nov 10, 2023
bjacob added a commit that referenced this pull request Nov 10, 2023
As noted in #15525, it performs about a third faster, and our primary
use case in #15158 has no padding problem with that wider size.
bjacob added a commit that referenced this pull request Nov 14, 2023
* Consistently compare with/without skipping of intermediate roundings.
A catch is that the ukernel may fall back to a generic code path (and
that fallback is consistently exercised by the test, even when a
non-fallback path is also available and tested). And generic code paths
("tile functions") never skipped intermediate roundings, even if allowed
to by the flag. This caused complicated test code that retried on
error. This PR simply adds the skipping-intermediate-roundings generic
tile functions, so the test code is simpler, and concretely I just
needed that for #15543 as I'm adding bf16-accumulator tile functions
that are skipping intermediate roundings.
* I had to also update `iree-e2e-matmul-test` to switch to skipping
intermediate roundings. Unlike the ukernels' own tests, which really
must test both flavors, in `iree-e2e-matmul-test` we are e2e testing
what the compiler produces, and that is skipping intermediate roundings
at least by default, and while that could be overridden with
`--iree-llvmcpu-skip-intermediate-roundings=false`, we don't currently
test that in e2e matmul tests.
* Generate better random test input values. Some were too large: when
we generate random bfloat16 values to accumulate into bfloat16, they had better be
very small as we don't want to grow accumulators to the point where they
would start rounding. It's OK, because bfloat16 kernels use bfloat16
arithmetic instructions, not bit hacks, so correctness is sufficiently
tested on very small values. Conversely, for int8/int16 test input
values, we were generating a very narrow range and that was potentially
missing important coverage as some of our int kernels are starting to do
evil bit hacks (#15525).
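To make the bfloat16 point concrete, here is a minimal model of the rounding hazard (illustrative only, not the test harness; it assumes round-to-nearest-even truncation from f32 to bf16):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Round a float to bfloat16 precision (round-to-nearest-even), keeping the
   value in a float for convenience. */
static float bf16_round(float x) {
  uint32_t u;
  memcpy(&u, &x, sizeof u);
  u += 0x7FFF + ((u >> 16) & 1); /* standard RNE bias before truncation */
  u &= 0xFFFF0000u;              /* keep sign, exponent, top 7 mantissa bits */
  memcpy(&x, &u, sizeof u);
  return x;
}

int main(void) {
  /* bf16 has only 8 significand bits, so at magnitude 256 the spacing
     between representable values is already 2.0: an addend of 1.0 lands on
     a tie, rounds back down to even, and the accumulator stops growing. */
  printf("256 + 1 in bf16 = %g\n", (double)bf16_round(256.0f + 1.0f));
  return 0;
}
```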
ramiro050 pushed 3 commits to ramiro050/iree that referenced this pull request Dec 19, 2023