[REFACTOR/PASS] Formalize argument bind and match util #214
Merged
Conversation
ZihengJiang approved these changes on Jul 4, 2017
rohanmukh pushed a commit to rohanmukh/tvm that referenced this pull request on Jul 16, 2021

…#214)
* Add shape function for mlas_matmul
* Fix lint
* Add dynamic shape checking for mlas AlterOpLayout
* Add testing for dynamic shape checking
* Fix dense alterOpLayout
ylc pushed a commit to ylc/tvm that referenced this pull request on Sep 28, 2021

…#214)
* Add shape function for mlas_matmul
* Fix lint
* Add dynamic shape checking for mlas AlterOpLayout
* Add testing for dynamic shape checking
* Fix dense alterOpLayout
vinx13 pushed a commit to vinx13/tvm that referenced this pull request on Mar 9, 2022
Liam-Sturge added a commit to Liam-Sturge/tvm that referenced this pull request on Feb 3, 2023

This patch undoes the change that was put in place to prevent the build and installation of NNPACK from failing after the NNPACK external dependency cpuinfo renamed its default branch to main. See apache#13871. The issue has been fixed at the source by PR apache#214, so the change to `ubuntu_install_nnpack.sh` is no longer required: Maratyszcza/NNPACK#214
driazati pushed a commit that referenced this pull request on Feb 7, 2023

This patch undoes the change that was put in place to prevent the build and installation of NNPACK from failing after the NNPACK external dependency cpuinfo renamed its default branch to main. See #13871. The issue has been fixed at the source by PR #214, which is now merged into NNPACK, so the change to `ubuntu_install_nnpack.sh` is no longer required: Maratyszcza/NNPACK#214
LeiWang1999 added a commit to LeiWang1999/tvm that referenced this pull request on Nov 8, 2024
…e#214)
* Refactor tilelang dequantize module and add matmul_blocked_weight_only function
* Remove un-implemented code
* Implement BaseScheduler to wrap some related items
* Lint fix
* Test skip
* Refactor tilelang dequantize module and add matmul_blocked_weight_only function
* Test fix
* Hardware tuning demo
* Remove debug related items
* Implement tuner and cache fix
* Lint fix
* Test case fix
* Adapt Tuning Space generation with Roller
* Lint fix
* Refactor select_scheduler function for fine-grained interface: the select_scheduler function in the dense/__init__.py module has been refactored to use a fine-grained interface. This change provides more flexibility and enables the implementation of high-performance kernels.
* Update MatmulScheduler class in matmul_tensorcore.py: the MatmulScheduler class has been updated to calculate the number of threads based on the block size and warp size. This ensures optimal GPU warp configuration for NVIDIA GPUs.
* Improve test_general_matmul_tilelang_kernel.py: the module has been improved to include additional test cases and assertions for correctness.
* Refactor select_scheduler function for fine-grained interface
* Refactor NotImplementedError message in BaseTLHint class
* Update submodule reference in 3rdparty/tvm
* Refactor matmul_finetune function to use topk=20 for hardware-aware finetuning
* Refactor submodule reference in 3rdparty/tvm
* Lint fix
* Refactor test_general_matmul_tilelang_impl.py and test_tilelang_gemm.py
* Refactor MatmulConfig to enable weight propagation on supported devices
* Refactor test_general_matmul_tilelang_impl.py and test_general_matmul_tilelang_kernel.py to use centered random values for input tensors
* Test fix
* Test fix
* Refactor flash attention tests to use centered random values for input tensors
* Refactor flash attention tests to use centered random values for input tensors
* Refactor flash attention tests to skip the test if flash_attn is not installed
* Lint fix
* Test fix
* Test fix
* Test fix
* Refactor quantization module imports
* Lint fix
* Update yapf version in requirements-dev.txt and requirements-test.txt
* Refactor shared memory to global memory storage in MatmulFineGrainScheduler
* Test fix
* Format
* Test fix
* Refactor tensorcore policy to use list comprehension for readability
* Lint fix
add arg_binder.h for argument binding utilities
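The change itself, "add arg_binder.h for argument binding utilities", formalizes a bind-and-match helper: while lowering a function, each symbolic parameter is matched against the concrete value supplied for it, being defined on its first use and checked for consistency on every later use. The following is a minimal standalone C++ sketch of that idea only; the names (`SimpleArgBinder`, `Bind`, `failed_checks`) are hypothetical and do not reproduce TVM's actual `ArgBinder` interface, which operates on IR expressions and emits let/assert statements rather than working on plain integers.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of the bind-and-match idea: the first time a symbolic
// parameter is seen it is defined to the incoming value; later occurrences
// must match the recorded definition, otherwise a failed check is recorded.
class SimpleArgBinder {
 public:
  // Bind symbolic parameter `name` to the concrete `value` passed by the caller.
  void Bind(const std::string& name, int value) {
    auto it = defs_.find(name);
    if (it == defs_.end()) {
      // First occurrence: record a definition (a "let" in IR terms).
      defs_[name] = value;
    } else if (it->second != value) {
      // Already bound to a different value: record a check
      // (an "assert" in IR terms) that fails for this call.
      checks_.push_back(name + " == " + std::to_string(value) + ", but " +
                        name + " was bound to " + std::to_string(it->second));
    }
  }

  const std::unordered_map<std::string, int>& defs() const { return defs_; }
  const std::vector<std::string>& failed_checks() const { return checks_; }

 private:
  std::unordered_map<std::string, int> defs_;  // name -> defined value
  std::vector<std::string> checks_;            // human-readable failed checks
};

int main() {
  // Example: two buffers that share the symbolic dimension n, e.g.
  // A has shape (n, k) and B has shape (k, n). Binding both shapes against
  // the caller's concrete arguments defines n and k once and checks the
  // repeated uses of n.
  SimpleArgBinder binder;
  binder.Bind("n", 128);  // defines n = 128
  binder.Bind("k", 64);   // defines k = 64
  binder.Bind("n", 128);  // matches the definition, nothing to record
  binder.Bind("n", 256);  // mismatch: recorded as a failed check

  for (const auto& kv : binder.defs()) {
    std::cout << "def " << kv.first << " = " << kv.second << "\n";
  }
  for (const auto& c : binder.failed_checks()) {
    std::cout << "check failed: " << c << "\n";
  }
  return 0;
}
```

In the real utility the analogous definitions and checks are emitted as IR statements when binding function arguments, so shape or stride mismatches surface as runtime assertions; factoring that logic into a reusable header is what the PR title means by formalizing the argument bind and match util.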