install codegen header to torch/include #1405
base: main
Conversation
Force-pushed from ccff7d5 to 8620737, then from 8620737 to c3e1df3.
@guangyey : this does not seem to work for me, I still don't get headers installed to
Force-pushed from d6aa4e3 to 242ad4e.
This commit extends the existing CUDA test to cover the XPU SyclExtension case for the same feature, `py_limited_api`. NOTE: THE CHANGE CANNOT BE MERGED AS IS. It requires an update of the commit pin for torch-xpu-ops. Requires: intel/torch-xpu-ops#1405. Signed-off-by: Dmitry Rogozhkin <[email protected]>
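For illustration only, here is a minimal sketch of what such a test extension's build setup might look like, assuming `SyclExtension` accepts the same `py_limited_api` keyword as `CppExtension`/`CUDAExtension` (the feature the commit above exercises). All package, module, and source file names below are hypothetical and not taken from the PR.

```python
# Hedged sketch, not the actual test from the commit: a hypothetical setup.py
# that builds an XPU SyclExtension against the CPython stable ABI.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, SyclExtension

setup(
    name="xpu_python_agnostic",                        # hypothetical package name
    ext_modules=[
        SyclExtension(
            "xpu_python_agnostic._C",                  # hypothetical extension module
            sources=["extension.cpp", "kernel.sycl"],  # hypothetical source files
            py_limited_api=True,                       # the feature under test
        )
    ],
    cmdclass={"build_ext": BuildExtension},
    # Tag the wheel for the stable ABI (CPython >= 3.9 chosen here as an assumption).
    options={"bdist_wheel": {"py_limited_api": "cp39"}},
)
```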
@guangyey : I see this resolved now after the last changes.
@@ -1,89 +1,95 @@
- if(Codegen_GPU_cmake_included)
+ if(Codegen_XPU_cmake_included)
That's quite a change. I worry we might see more issues with torch code being updated and two things getting out of sync:
- torch-xpu-ops Codegen.cmake and related scripts vs. the torch-side versions of the same
- ops code in the native/xpu folder, which you've modified for some include files

Any chance we can start bringing pieces of this code into the torch codebase itself? For example, any chance we can stop having Codegen.cmake here on the torch-xpu-ops side?
Note: don't consider the above a request to do that in this PR. I am just trying to discuss.
Force-pushed from 242ad4e to 0b53f33.
Motivation

This PR addresses a code generation issue related to XPU. Currently, there are two separate codegen paths for XPU: one in stock PyTorch and one in torch-xpu-ops. The corresponding build directories are:
- `build/aten/src/ATen` (for stock PyTorch)
- `build/xpu/ATen` (for torch-xpu-ops)

However, in the torch-xpu-ops codegen, we mistakenly omitted installing the XPU op headers from `build/xpu/ATen/ops` to `build/aten/src/ATen/ops`. This PR fixes that issue and also removes some unnecessary code for better maintainability.

Solution
We copy the codegen output from torch-xpu-ops into the stock PyTorch build tree, so that the XPU op headers get installed under torch/include.
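The actual change lives in torch-xpu-ops' Codegen.cmake and its related codegen scripts; the snippet below is only a minimal Python sketch of the missing copy step, assuming the build-directory layout described above (the function name and the `build` root are illustrative).

```python
# Minimal sketch of the missing install step, not the actual torch-xpu-ops code:
# copy the XPU op headers generated under build/xpu/ATen/ops into the stock
# PyTorch codegen directory build/aten/src/ATen/ops so they are installed
# alongside the other ATen headers (and end up under torch/include).
import shutil
from pathlib import Path

def install_xpu_op_headers(build_root: str = "build") -> None:
    src_dir = Path(build_root) / "xpu" / "ATen" / "ops"           # torch-xpu-ops codegen output
    dst_dir = Path(build_root) / "aten" / "src" / "ATen" / "ops"  # stock PyTorch codegen output
    dst_dir.mkdir(parents=True, exist_ok=True)
    for header in src_dir.glob("*.h"):
        shutil.copy2(header, dst_dir / header.name)

if __name__ == "__main__":
    install_xpu_op_headers()
```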
Additional Context

Fixes pytorch/pytorch#145902