Torch dependency for importing triton, kernel execution and autotuning #204
Comments
All tutorials use Torch as a reference for both functionality and performance. We want to compare Triton's performance with native Torch performance, not NumPy. So it's not just to make tensors; it's to give performance reference numbers. Also, it's preferable to be able to run any tutorial on any device.
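For context, the tutorials benchmark against torch roughly like this — a sketch assuming a CUDA device and the `add` wrapper defined in 01-vector-add.py:

```python
import torch
import triton
import triton.testing

# Sketch: torch serves as a performance baseline, not merely a tensor
# factory. `add` is assumed to be the tutorial's Triton kernel wrapper.
size = 1 << 24
x = torch.rand(size, device='cuda')
y = torch.rand(size, device='cuda')

ms_torch = triton.testing.do_bench(lambda: x + y)       # native torch baseline
ms_triton = triton.testing.do_bench(lambda: add(x, y))  # Triton kernel

# Effective bandwidth: 3 arrays moved (2 reads + 1 write)
gbps = lambda ms: 3 * size * x.element_size() / ms * 1e-6
print(f"torch: {gbps(ms_torch):.0f} GB/s, triton: {gbps(ms_triton):.0f} GB/s")
```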
Sorry about the confusion. This issue just uses the tutorial as an illustration of the runtime dependency on torch.
I see. In this case, it would be better to open a separate issue for each particular case where a dependency on Torch seems unreasonable. Please note that any related changes outside of the CPU backend (third_party/cpu) should go through the upstream repo.
Currently, torch is required for importing triton and performing autotuning. This seems like a relatively heavy runtime dependency in the context of the CPU backend, as numpy can easily be used instead. Opening here as suggested in triton-lang#205 to minimize future merge conflicts. Ideally there would be a test for this, but with the CPU backend out-of-tree this seems hard to test. See also triton-lang#204, triton-lang#205.

# New contributor declaration

- [x] I am not making a trivial change, such as fixing a typo in a comment.
- [x] I have written a PR description following these [rules](https://cbea.ms/git-commit/#why-not-how).
- [x] I have run `pre-commit run --from-ref origin/main --to-ref HEAD`.
- Select one of the following.
  - [ ] I have added tests.
    - `/test` for `lit` tests
    - `/unittest` for C++ tests
    - `/python/test` for end-to-end tests
  - [x] This PR does not need a test because it is not (currently) easy to test and basic functionality should be covered by existing tests.
- Select one of the following.
  - [x] I have not added any `lit` tests.
  - [ ] The `lit` tests I have added follow these [best practices](https://mlir.llvm.org/getting_started/TestingGuide/#filecheck-best-practices), including the "tests should be minimal" section. (Usually running Python code and using the instructions it generates is not minimal.)
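A hedged sketch of the kind of substitution the PR describes — the helper names are hypothetical, not Triton's actual internals — where torch becomes an optional import and numpy provides host-side buffers on the CPU backend:

```python
# Hypothetical illustration of making torch optional: the CPU backend only
# needs host buffers, so numpy can serve as a fallback allocator.
try:
    import torch

    def _empty(shape, dtype="float32"):
        # allocate a scratch buffer (e.g. for autotuning runs)
        return torch.empty(shape, dtype=getattr(torch, dtype))

    def _zero_(buf):
        # reset accumulation buffers between autotuning runs
        buf.zero_()
except ImportError:
    import numpy as np

    def _empty(shape, dtype="float32"):
        return np.empty(shape, dtype=np.dtype(dtype))

    def _zero_(buf):
        buf.fill(0)
```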
Describe the bug
Consider the following, adapted from 01-vector-add.py to use numpy instead of torch. On GPU, Triton depends on torch for a number of reasons that would be hard to replace (e.g. interfacing with CUDA from Python), but on CPU, torch is a relatively heavy dependency just to make tensors, and numpy is strictly smaller (as torch depends on numpy).
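A minimal sketch of that adaptation, assuming the CPU backend accepts numpy arrays where the tutorial passes torch tensors:

```python
import numpy as np
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, output_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # each program instance handles one BLOCK_SIZE-wide slice of the inputs
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(output_ptr + offsets, x + y, mask=mask)

def add(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    # numpy arrays stand in for the tutorial's torch tensors
    output = np.empty_like(x)
    n_elements = output.size
    grid = lambda meta: (triton.cdiv(n_elements, meta['BLOCK_SIZE']),)
    add_kernel[grid](x, y, output, n_elements, BLOCK_SIZE=1024)
    return output

size = 98432
x = np.random.rand(size).astype(np.float32)
y = np.random.rand(size).astype(np.float32)
np.testing.assert_allclose(add(x, y), x + y, rtol=1e-6)
```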
Currently, when torch is not installed, this errors with:
Environment details
triton-cpu: daa7eb0