[tuning] gemm tuning script v3.3 #606
Conversation
The script was formatted with `~/.local/bin/yapf -i --style='{based_on_style: pep8}' tune_gemm.py`.
There seems to be an issue with handling M=1?

This should be fixed with 1daec1f.
```
@@ -41,8 +42,12 @@ def matmul_kernel(
    acc_dtype = tl.float32 if a_ptr.type.element_ty != tl.int8 else tl.int32
    accumulator = tl.zeros((BLOCK_SIZE_M, BLOCK_SIZE_N), dtype=acc_dtype)
    for k in range(0, tl.cdiv(K, BLOCK_SIZE_K * SPLIT_K)):
        a = tl.load(a_ptrs)
        b = tl.load(b_ptrs)
        if EVEN_K:
```
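For context, an `EVEN_K` guard like the one in this hunk typically selects between plain loads and masked tail loads inside the k-loop. A minimal sketch of that pattern (the `else` branch and names such as `offs_k` are assumptions, not a quote of this PR's diff):

```python
if EVEN_K:
    # K divides evenly into BLOCK_SIZE_K * SPLIT_K chunks: every lane is
    # in bounds, so unmasked loads are safe and cheaper.
    a = tl.load(a_ptrs)
    b = tl.load(b_ptrs)
else:
    # Tail chunk: mask out-of-range K elements and substitute zeros so
    # they contribute nothing to the accumulator.
    k_remaining = K - k * (BLOCK_SIZE_K * SPLIT_K)
    a = tl.load(a_ptrs, mask=offs_k[None, :] < k_remaining, other=0.0)
    b = tl.load(b_ptrs, mask=offs_k[:, None] < k_remaining, other=0.0)
```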
For lines 35 and 36, maybe we can apply a modulo by `M` and `N` to the offsets added to `a_ptr` and `b_ptr`, to support arbitrary M and N values.
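For reference, the wrap-around indexing the reviewer describes is the pattern used in the upstream Triton matmul tutorial; a sketch with assumed names (`pid_m`/`pid_n` are the program IDs along M and N, `stride_*` the usual strides):

```python
# Wrap row/column offsets modulo M and N so lanes past the matrix edge
# alias valid rows/columns instead of reading out of bounds; the final
# store must then be masked to keep results correct.
offs_am = (pid_m * BLOCK_SIZE_M + tl.arange(0, BLOCK_SIZE_M)) % M
offs_bn = (pid_n * BLOCK_SIZE_N + tl.arange(0, BLOCK_SIZE_N)) % N
offs_k = tl.arange(0, BLOCK_SIZE_K)
a_ptrs = a_ptr + offs_am[:, None] * stride_am + offs_k[None, :] * stride_ak
b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_bn[None, :] * stride_bn
```

The extra modulo on every index computation is the overhead the reply below is concerned about.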
We can, but that will harm performance a lot.
Let me create a ticket for this issue.
Is there anything that needs to change in the script?
No, since we don't change any API of the script.
@xiaohuguo2023 @vgokhale @scxiao

Re: Compilation stage: During compilation, each thread will query the GPU info, such as […]. I tried to set […]. Therefore, I introduced a very hacky option, i.e. […].

Profiling stage: This is very tricky.
This could be some setting in my own docker, so could you confirm whether this is also the case in your environment?

Another thing regarding the profiling stage: I found that invoking rocprof/rocprofv2 makes all GPUs busy for a very short period of time before the kernel starts executing. I suspect this is because rocprof/rocprofv2 queries all GPU information in the system. I'm not sure if we can avoid this, but the GPU busy time is definitely insignificant.
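If the profiler touching every device ever becomes a problem, one possible mitigation (a sketch, not something this PR does; the env var name is the standard ROCm one) is to restrict the devices visible to each rocprof invocation:

```python
# Sketch: expose only one GPU to the profiled process, so rocprof's
# device enumeration cannot touch the other GPUs in the node.
import os
import subprocess

env = dict(os.environ, ROCR_VISIBLE_DEVICES="0")  # GPU 0 only
subprocess.run(["rocprof", "--stats", "python", "matmul.py"], env=env, check=True)
```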
Yeah, I have a similar observation. This is my setting: […] and my rocm-smi: […]
@xiaohuguo2023 Thanks for the confirmation. This is weird. I'll file a ticket for this issue.
LGTM. Thanks for addressing the comments.
LGTM, excellent work
* Copy *tune_gemm* from `triton-mlir` branch to `main_perf` branch

The source commit in `triton-mlir` branch is the following one:

```
commit cf44637
Author: Lixun Zhang <[email protected]>
Date:   Tue Jul 23 14:22:01 2024 -0500

    [tuning] gemm tuning script v3.3 (#606)
```

*tune_gemm* was copied from the source branch directory `scripts/amd/gemm` to the destination branch directory `python/perf-kernels/tune_gemm`.

The SHA-256 hashes of *tune_gemm* files are the following ones:

```
423aef1deb6c60f6578a1ecfc94d2473f8746b00d0368c553d31641fcfa5e354  README.md
46ab93978fee33f75df23332f12546dae7910478c391f08b7b1ebd415d8266b7  icache_flush.py
f18711544641b810a652e6a6629bfa2b613f6ade87399e88fdf05b81d4af58a4  matmul.py
84a1c80ede36d3154e51188276eda2d2d0f52ed4f496ff69349c390d83b8ec10  matmul_kernel.py
2812b40183637bc8d7e47d283c7d66b1792134a43de76f3eacf7b9b3e1c2431a  one_config.py
0ac09c33b0173cea06ddabbf9f4e3afa1816781dea4fdcce5894a7e7d6a80e19  rocprof_gemm.py
00eff41cf1c0bfc41d623e42b51706af67639fec76146741e2067d2a93e0148a  utils/file_generator.py
cb7afb773ccee835b00396cccf87e0d44fe513131161f031fae42453725b3c82  utils/utils.py
59f23811b660e49e566927853926a21f02a7014bb19c8ea67e6b382db6c59900  tune_gemm.py
e787f35d750b869f113b3c01692f64243a9cb8a71a18ade2f0465f614f7284e4  tune_gemm.sh
```

The files were kept as-is despite `pre-commit` intentions to change them. After that, the *tune_gemm* directory in code and documentation was fixed to reflect its new location.
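The hash list above can be re-checked after the copy; a minimal sketch, assuming it is run from the new `python/perf-kernels/tune_gemm` directory:

```python
# Recompute SHA-256 digests of the copied files for comparison against
# the list in the commit message (subset of files shown).
import hashlib
import pathlib

for name in ["README.md", "matmul.py", "matmul_kernel.py", "tune_gemm.py"]:
    digest = hashlib.sha256(pathlib.Path(name).read_bytes()).hexdigest()
    print(f"{digest}  {name}")
```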
Please check the README for changes introduced in v3.3.
This PR enables […].
Example tuning session with 2 gemm sizes:

The elapsed time of the kernel is very small, so hardware noise plays a larger role here; this example is meant to demonstrate the compilation time of the tuning process. One thing to note is that the second gemm's compilation time is much smaller than the first one's, indicating cache reuse between the two gemms.
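The session output itself was not preserved here. For reference, a two-size tuning run can be driven roughly as follows (a sketch: `input.yaml` and the flag values are illustrative; `--gemm_size_file`, `--ngpus`, and `--jobs` are taken from the tune_gemm README):

```python
# Sketch: launch a tuning session over a yaml file listing two gemm sizes.
import subprocess

subprocess.run(
    [
        "python", "tune_gemm.py",
        "--gemm_size_file", "input.yaml",  # the two gemm sizes to tune
        "--ngpus", "8",                    # profile configs across 8 GPUs
        "--jobs", "32",                    # compile configs with 32 parallel jobs
    ],
    check=True,
)
```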
cc+ @xiaohuguo2023 You can try this one on your large-sample stream-K tuning to see if it helps.