
[REFACTOR/PASS] Formalize argument bind and match util #214

Merged 2 commits into apache:master from binder on Jul 4, 2017
Conversation

@tqchen (Member) commented on Jul 4, 2017

add arg_binder.h for argument binding utilities
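
The description above is terse, so here is a minimal conceptual sketch of what an argument-binding utility does: the first time a symbolic variable is bound to a runtime value it is defined, and every later binding of the same variable must match, which becomes a runtime check. The class and method names below are hypothetical illustrations, not the actual contents of arg_binder.h.

```cpp
// Conceptual sketch only (hypothetical names, not the arg_binder.h API):
// bind symbolic variables to runtime values; the first binding defines the
// variable, later bindings of the same variable must match.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

class ArgBinderSketch {
 public:
  // Bind symbolic variable `var` to the concrete `value` seen at runtime.
  void Bind(const std::string& var, int64_t value) {
    auto it = defs_.find(var);
    if (it == defs_.end()) {
      defs_[var] = value;  // first occurrence: acts like a let-binding
    } else if (it->second != value) {
      // later occurrence with a different value: record a failed match
      mismatches_.push_back(var + ": expected " + std::to_string(it->second) +
                            ", got " + std::to_string(value));
    }
  }
  const std::vector<std::string>& Mismatches() const { return mismatches_; }

 private:
  std::map<std::string, int64_t> defs_;
  std::vector<std::string> mismatches_;
};

int main() {
  ArgBinderSketch binder;
  // Two buffer arguments share the symbolic dimension "n".
  binder.Bind("n", 128);  // defines n = 128
  binder.Bind("n", 128);  // consistent, nothing to report
  binder.Bind("n", 256);  // inconsistent, recorded as a failed match
  for (const auto& msg : binder.Mismatches()) {
    std::cout << "argument mismatch: " << msg << "\n";
  }
  return 0;
}
```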

tqchen merged commit 4bb3c35 into apache:master on Jul 4, 2017
tqchen deleted the binder branch on Jul 5, 2017 at 04:25
rohanmukh pushed a commit to rohanmukh/tvm that referenced this pull request Jul 16, 2021
…#214)

* Add shape function for mlas_matmul

* Fix lint

* Add dynamic shape checking for mlas AlterOpLayout

* Add testing for dynamic shape checking

* Fix dense AlterOpLayout
ylc pushed a commit to ylc/tvm that referenced this pull request Sep 28, 2021
…#214)

* Add shape function for mlas_matmul

* Fix lint

* Add dynamic shape checking for mlas AlterOpLayout

* Add testing for dynamic shape checking

* Fix dense AlterOpLayout
vinx13 pushed a commit to vinx13/tvm that referenced this pull request Mar 9, 2022
Liam-Sturge added a commit to Liam-Sturge/tvm that referenced this pull request Feb 3, 2023
This patch undoes the workaround that was added to keep the NNPACK build
and installation from failing after cpuinfo, an external dependency of
NNPACK, renamed its default branch to main.

See apache#13871

The issue has been fixed at the source by PR Maratyszcza/NNPACK#214, so the change to
`ubuntu_install_nnpack.sh` is no longer required:

Maratyszcza/NNPACK#214
driazati pushed a commit that referenced this pull request Feb 7, 2023
This patch undoes the workaround that was added to keep the NNPACK build and installation from failing after cpuinfo, an external dependency of NNPACK, renamed its default branch to main.

See #13871

The issue has been fixed at the source by PR #214, which is now merged into NNPACK, so the change to `ubuntu_install_nnpack.sh` is no longer required:

Maratyszcza/NNPACK#214
LeiWang1999 added a commit to LeiWang1999/tvm that referenced this pull request Nov 8, 2024
…e#214)

* Refactor tilelang dequantize module and add matmul_blocked_weight_only function

* remove unimplemented code.

* Implement BaseScheduler to wrap some related items.

* lint fix

* test skip

* Refactor tilelang dequantize module and add matmul_blocked_weight_only function

* test fix

* hardware tuning demo

* remove debug related items.

* implement tuner and cache fix

* lint fix

* test case fix.

* Adapt Tuning Space generation with Roller

* lint fix

* Refactor select_scheduler function for fine-grained interface

The select_scheduler function in the dense/__init__.py module has been refactored to use a fine-grained interface. This change provides more flexibility and enables the implementation of high-performance kernels.

Update MatmulScheduler class in matmul_tensorcore.py

The MatmulScheduler class in the matmul_tensorcore.py module has been updated to calculate the number of threads based on the block size and warp size. This ensures optimal GPU warp configuration for NVIDIA GPUs. (A minimal sketch of this kind of calculation appears after this commit list.)

Improve test_general_matmul_tilelang_kernel.py

The test_general_matmul_tilelang_kernel.py module has been improved to include additional test cases and assertions for correctness.

* Refactor select_scheduler function for fine-grained interface

* Refactor NotImplementedError message in BaseTLHint class

* Update submodule reference in 3rdparty/tvm

* Refactor matmul_finetune function to use topk=20 for hardware-aware finetuning

* Refactor submodule reference in 3rdparty/tvm

* lint fix

* Refactor test_general_matmul_tilelang_impl.py and test_tilelang_gemm.py

* Refactor MatmulConfig to enable weight propagation on supported devices

* Refactor test_general_matmul_tilelang_impl.py and test_general_matmul_tilelang_kernel.py to use centered random values for input tensors

* test fix

* test fix

* Refactor flash attention tests to use centered random values for input tensors

* Refactor flash attention tests to use centered random values for input tensors

* Refactor flash attention tests to skip test if flash_attn is not installed

* lint fix

* test fix

* test fix

* test fix

* Refactor quantization module imports

* lint fix

* Update yapf version in requirements-dev.txt and requirements-test.txt

* Refactor shared memory to global memory storage in MatmulFineGrainScheduler

* test fix

* format

* test fix

* Refactor tensorcore policy to use list comprehension for readability

* lint fix
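
One of the commit messages above notes that MatmulScheduler derives its thread count from the block and warp sizes. The sketch below shows a common way such a calculation works; the tile sizes are made-up example values, and this is not the actual tilelang code.

```cpp
// Hypothetical illustration of deriving a kernel's thread count from tile
// and warp sizes (example values; not the tilelang MatmulScheduler code).
#include <cstdio>

int main() {
  const int kWarpSize = 32;                // threads per warp on NVIDIA GPUs
  const int block_m = 128, block_n = 128;  // output tile per thread block
  const int warp_m = 64, warp_n = 64;      // output tile per warp

  // Warps needed to cover the block tile, times the warp width, gives the
  // number of threads the kernel is launched with.
  const int warps_per_block = (block_m / warp_m) * (block_n / warp_n);
  const int num_threads = warps_per_block * kWarpSize;

  std::printf("warps_per_block=%d, num_threads=%d\n", warps_per_block, num_threads);
  return 0;
}
```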