
install codegen header to torch/include #1405

Open · wants to merge 3 commits into main
Conversation

guangyey
Contributor

@guangyey guangyey commented Feb 25, 2025

Motivation

This PR addresses a code generation issue related to XPU. Currently, there are two separate codegen paths for XPU:

  1. Stock PyTorch – Generates code for oneDNN ops.
  2. torch-xpu-ops – Generates code for SYCL kernel ops.

The corresponding build directories are:

  1. build/aten/src/ATen (for stock PyTorch)
  2. build/xpu/ATen (for torch-xpu-ops)

However, the torch-xpu-ops codegen mistakenly omitted the step that installs the XPU op headers from build/xpu/ATen/ops to build/aten/src/ATen/ops, so they never reach torch/include. This PR fixes that and also removes some unnecessary code for better maintainability.

Solution

We copy the codegen logic from torch-xpu-ops into stock PyTorch.

Additional Context

Fix pytorch/pytorch#145902

@guangyey guangyey force-pushed the guangyey/codegen branch 15 times, most recently from ccff7d5 to 8620737 Compare February 25, 2025 12:20
@guangyey guangyey requested a review from EikanWang February 25, 2025 12:54
@dvrogozh
Contributor

@guangyey : this does not seem to work for me; I still don't get the headers installed to torch/include/. Do we need any changes on the pytorch side as well (outside of torch-xpu-ops)?

$ find . -name cat_xpu_dispatch.h
./build/xpu/ATen/ops/cat_xpu_dispatch.h
./build/aten/src/ATen/ops/cat_xpu_dispatch.h
$ find . -name cat_cuda_dispatch.h
./build/aten/src/ATen/ops/cat_cuda_dispatch.h
./torch/include/ATen/ops/cat_cuda_dispatch.h

@guangyey guangyey force-pushed the guangyey/codegen branch 8 times, most recently from d6aa4e3 to 242ad4e Compare February 26, 2025 08:29
dvrogozh added a commit to dvrogozh/pytorch that referenced this pull request Feb 26, 2025
Commit extends existing CUDA test to cover XPU SyclExtension
case for the same feature - `py_limited_api`.

NOTE: THE CHANGE CAN NOT BE MERGED AS IS

Change requires update of the commit pin for torch-xpu-ops.

Requires: intel/torch-xpu-ops#1405
Signed-off-by: Dmitry Rogozhkin <[email protected]>
@dvrogozh
Contributor

> this does not seem to work for me, I still don't get headers installed to torch/include/

@guangyey : I see this resolved now after the latest changes.

@@ -1,89 +1,95 @@
-if(Codegen_GPU_cmake_included)
+if(Codegen_XPU_cmake_included)
Copy link
Contributor

That's quite a change. I worry we might see more issues as torch code gets updated and two things fall out of sync:

  • torch-xpu-ops Codegen.cmake and related scripts vs. the torch-side versions of the same
  • the ops code in the native/xpu folder, which you've modified for some include files

Any chance we can start bringing pieces of this code into the torch codebase itself? For example, can we stop keeping Codegen.cmake here on the torch-xpu-ops side?

Note: don't take the above as a request to do that in this PR; I'm just trying to start a discussion.

Successfully merging this pull request may close these issues.

xpu: installed pytorch is missing aten xpu ops headers (ATen/ops/cat_xpu_dispatch.h and others)