
Improve LHS tensor.pack on non-f32 types for x86 #15441

Open · Tracked by #16314
hanhanW opened this issue Nov 6, 2023 · 3 comments


hanhanW commented Nov 6, 2023

We have optimized codegen for packing on f32 types, but not for int8. This is a tracking issue for the int8 case. I observed that some pack ops are not vectorized, because masking is only supported on a limited set of ops for dynamic shapes. We should relax the condition to use isElementwise(), so that linalg.transpose ops can also get vectorized. I have an easy fix locally and will send it out for review.
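For reference, the kind of op that is currently skipped is a dynamically shaped linalg.transpose produced while lowering tensor.pack. A minimal, illustrative sketch (shapes and SSA names are made up):

```mlir
// A dynamically shaped transpose that should be vectorizable with masking,
// just like elementwise ops are today.
%t = linalg.transpose ins(%src : tensor<?x?xi8>)
                      outs(%init : tensor<?x?xi8>)
                      permutation = [1, 0]
```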

With the change and better distribution logic, we can save up to 43% of the total dispatch size for int8 models; see https://gist.github.com/iree-github-actions-bot/fa5becb880b9a6afc2d362883a585d5a

The next step is better pack codegen for non-f32 types. We need a pattern that packs with the innermost tile being a single element and then leverages the 16x16 transpose lowering. Looking at the transpose permutation map and using the vector.bitcast op should help here.
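A rough sketch of the bitcast-around-transpose rewrite meant here, assuming the i8 elements that must move together can be grouped into 4-byte units (the shapes are illustrative, not the exact tiles we would pick):

```mlir
// Bitcast groups of four i8 elements to i32 so the existing 16x16
// shuffle-based transpose lowering applies, then bitcast back.
%cast = vector.bitcast %src : vector<16x64xi8> to vector<16x16xi32>
%tr   = vector.transpose %cast, [1, 0] : vector<16x16xi32> to vector<16x16xi32>
%res  = vector.bitcast %tr : vector<16x16xi32> to vector<16x64xi8>
```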

@hanhanW hanhanW added codegen Shared code generation infrastructure and dialects codegen/llvm LLVM code generation compiler backend labels Nov 6, 2023
@hanhanW hanhanW self-assigned this Nov 6, 2023

hanhanW commented Nov 6, 2023

llvm/llvm-project#71454 fixes the vectorization issue.

hanhanW added a commit that referenced this issue Nov 7, 2023
It disables the special vector sizes for non-f32 cases because that logic only applies to 16x16 transpose cases. The improvements in dispatch sizes come from vectorization. We are not able to vectorize named ops if they have dynamic shapes, which is fixed by
llvm/llvm-project@03529b9.
The change allows backends to vectorize them because the shapes become static
(by tiling with size=1). It is not a hard condition; we track it
in #15441

The revision takes the number of threads into account, so we get better
multi-threaded performance. It also reduces runtime overheads.

This is a step toward #15391
and #15349

It improves the performance of the
[tensor.pack](#15349) op from 420
ms to 170 ms on an 8-threaded x86 CPU.
ramiro050 pushed a commit to ramiro050/iree that referenced this issue Dec 19, 2023
@hanhanW hanhanW changed the title Improve tensor.pack on int8 for x86 Improve tensor.pack on non-f32 types for x86 Jan 4, 2024

hanhanW commented Jan 8, 2024

Here are the details of the 16x16 transpose trick; the 4x4, 8x8, and 16x16 tricks all follow the same idea: https://stackoverflow.com/questions/29519222/how-to-transpose-a-16x16-matrix-using-simd-instructions
Implementation: https://github.com/llvm/llvm-project/blob/main/mlir/lib/Dialect/Vector/Transforms/LowerVectorTranspose.cpp
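For intuition, the lowering builds the transpose out of interleaving shuffles. A rough sketch of one interleave stage, using 4-wide rows for brevity (the 16x16 version repeats the idea over more rows and wider vectors, and the x86 backend maps such stages to unpack/shuffle instructions):

```mlir
// Interleave the low and high halves of two rows.
%lo = vector.shuffle %row0, %row1 [0, 4, 1, 5] : vector<4xf32>, vector<4xf32>
%hi = vector.shuffle %row0, %row1 [2, 6, 3, 7] : vector<4xf32>, vector<4xf32>
```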

In your prototype, I think we can add the bitcast pattern before transpose lowering, i.e., https://github.com/openxla/iree/blob/bd2c92dbb3d2109cd624fa18e75b9bf3caaa4ae5/compiler/src/iree/compiler/Codegen/LLVMCPU/LLVMCPUVectorLowering.cpp#L148

You can preset the lowering_config; the 16x16 shuffle optimization should kick in automatically and give us much better performance. If it works, we can then teach the tile size selection about it at https://github.com/openxla/iree/blob/bd2c92dbb3d2109cd624fa18e75b9bf3caaa4ae5/compiler/src/iree/compiler/Codegen/LLVMCPU/KernelDispatch.cpp#L1259-L1270
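For experimentation, presetting the config could look roughly like this. This is a sketch from memory; the exact attribute fields and tile sizes are assumptions and depend on the IREE version and the op being configured:

```mlir
// Hypothetical preset: tile so the vectorized transpose ends up operating on
// 16x16 (32-bit) vectors, which is what the shuffle lowering expects.
#config = #iree_codegen.lowering_config<tile_sizes = [[16, 16]]>
```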

Here is the benchmark data when I implemented the trick for f32 types: #13318

There can still be some performance improvements left on the table. In theory, we should be able to replace the vunpck*pd instructions with a combination of shufps + blend, which should be faster. The review comments in https://reviews.llvm.org/D148685 were very helpful to me.

@hanhanW hanhanW changed the title Improve tensor.pack on non-f32 types for x86 Improve LHS tensor.pack on non-f32 types for x86 Feb 5, 2024

hanhanW commented Feb 5, 2024

I have a prototype in https://github.com/hanhanW/iree/tree/improve-pack, but it needs to be structured better. The next steps to try are:

  1. Generate vector.bitcast around vector.transpose in the virtual vector lowering stage.
  2. Add a pattern that matches vector.transfer_read -> shape_cast -> bitcast to help flatten them correctly.
  3. Add a similar pattern for the vector.transfer_write chain.
  4. Unroll bitcast to 1-D vectors, which is prototyped in the branch.

Steps 2 and 3 are needed; otherwise we will generate a bunch of scalar vector.bitcast ops during step 4. The chain that step 2 would match is sketched below.
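A rough sketch of that chain, with illustrative static shapes and values (none of these SSA names come from the prototype); the pattern would flatten the read of the narrow element type so it lines up with the bitcast:

```mlir
// Read i8 data, flatten it, and bitcast to i32.
%c0   = arith.constant 0 : index
%pad  = arith.constant 0 : i8
%read = vector.transfer_read %src[%c0, %c0], %pad {in_bounds = [true, true]}
          : tensor<16x64xi8>, vector<16x64xi8>
%flat = vector.shape_cast %read : vector<16x64xi8> to vector<1024xi8>
%cast = vector.bitcast %flat : vector<1024xi8> to vector<256xi32>
```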
