
Allow for more elemwise torch functions using broadcast_tensor and vmap #1032

Merged
merged 1 commit into from
Nov 19, 2024

Conversation

Ch0ronomato
Contributor

@Ch0ronomato Ch0ronomato commented Oct 12, 2024

Description

In the event that the scalar operator Elemwise is broadcasting over doesn't have a direct torch function, we can leverage torch.vmap and torch.broadcast_tensors to replicate the ufunc machinery.
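The idea in the description can be sketched with NumPy's equivalent machinery. This is an illustration of the concept, not the PR's actual torch code: `np.broadcast_arrays` stands in for `torch.broadcast_tensors`, `np.vectorize` stands in for `torch.vmap`, and the scalar function is hypothetical.

```python
import numpy as np

def elemwise_fallback(scalar_fn, *inputs):
    # Align shapes first, mirroring torch.broadcast_tensors.
    broadcasted = np.broadcast_arrays(*inputs)
    # np.vectorize plays the role of torch.vmap here: it applies the
    # scalar function elementwise over the already-broadcast inputs.
    return np.vectorize(scalar_fn)(*broadcasted)

result = elemwise_fallback(lambda a, b: a + 2 * b,
                           np.arange(3),      # shape (3,)
                           np.ones((2, 3)))   # shape (2, 3)
print(result.shape)  # (2, 3)
```

The explicit broadcast step is what lets a purely scalar function behave like a ufunc over mismatched input shapes.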

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

📚 Documentation preview 📚: https://pytensor--1032.org.readthedocs.build/en/1032/

@Ch0ronomato
Contributor Author

I need to add a test, but I want to get feedback on #1031 before continuing.

@Ch0ronomato Ch0ronomato changed the title Use broadcast tensor Allow for more elemwise torch functions using broadcast_tensor and vmap Oct 12, 2024
@Ch0ronomato
Contributor Author

I'll fix the tests.


codecov bot commented Oct 15, 2024

Codecov Report

Attention: Patch coverage is 45.45455% with 6 lines in your changes missing coverage. Please review.

Project coverage is 82.10%. Comparing base (a570dbf) to head (c7e8a64).
Report is 14 commits behind head on main.

Files with missing lines Patch % Lines
pytensor/link/pytorch/dispatch/elemwise.py 45.45% 6 Missing ⚠️

@@            Coverage Diff             @@
##             main    #1032      +/-   ##
==========================================
- Coverage   82.10%   82.10%   -0.01%     
==========================================
  Files         183      183              
  Lines       47924    47932       +8     
  Branches     8632     8634       +2     
==========================================
+ Hits        39348    39354       +6     
- Misses       6410     6413       +3     
+ Partials     2166     2165       -1     
Files with missing lines Coverage Δ
pytensor/link/pytorch/dispatch/elemwise.py 65.38% <45.45%> (-3.37%) ⬇️

... and 1 file with indirect coverage changes


Member

@ricardoV94 ricardoV94 left a comment


Only some nits left, PR looks great!


```python
def elemwise_fn(*inputs):
    Elemwise._check_runtime_broadcast(node, inputs)
    shaped_inputs = torch.broadcast_tensors(*inputs)
```
Member


Nit: more precise name:

Suggested change:
```diff
-shaped_inputs = torch.broadcast_tensors(*inputs)
+broadcasted_inputs = torch.broadcast_tensors(*inputs)
```

Also needs to be changed below
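The quoted elemwise_fn calls Elemwise._check_runtime_broadcast before broadcasting. As a rough, hypothetical sketch of what such a runtime check can do (an illustration, not PyTensor's actual implementation): reject inputs that would broadcast along a dimension the graph did not declare broadcastable.

```python
def check_runtime_broadcast(declared_broadcastable, input_shapes, out_shape):
    # declared_broadcastable: per input, a tuple of bools per dimension
    # (True means the graph allows length-1 broadcasting on that dim).
    # Hypothetical names; assumes all inputs share the output's ndim.
    for bcast, shape in zip(declared_broadcastable, input_shapes):
        for dim_bcast, dim_len, out_len in zip(bcast, shape, out_shape):
            if dim_len == 1 and out_len != 1 and not dim_bcast:
                raise ValueError(
                    "Runtime broadcasting along a dimension not marked "
                    "broadcastable at graph-construction time"
                )

# Declared broadcastable on dim 1: fine.
check_runtime_broadcast([(False, True)], [(2, 1)], (2, 3))
```

Failing fast here keeps silent shape surprises out of the compiled function.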

Comment on lines 28 to 29:
```python
# @todo: This will fail for anything that calls
# `.item()`
```
Member


Remove the todo; it's not something that should be addressed in this implementation but in specific Ops, so if anything it should exist as a GitHub issue?

Contributor Author


Nope, I agree with you.

Review comment on pytensor/link/pytorch/linker.py (outdated, resolved)
@Ch0ronomato
Contributor Author

I loosened the failing test in #988; this should be good after that.

```python
def elemwise_fn(*inputs):
    Elemwise._check_runtime_broadcast(node, inputs)
    return base_fn(*inputs)

if hasattr(scalar_op, "nfunc_spec") and hasattr(torch, scalar_op.nfunc_spec[0]):
```
Member


Should we do the same trick for scipy.x that you did in the other PR?

Contributor Author


...but then I have to write a new test for this PR 😅
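The dispatch pattern in the snippet discussed above can be sketched with NumPy standing in for torch: prefer a native function named by the scalar op's `nfunc_spec`, and fall back to a generic elementwise loop otherwise. The scalar-op classes here are hypothetical stand-ins, not PyTensor's API.

```python
import numpy as np

class AddOp:
    # Hypothetical stand-in for a scalar op with an nfunc_spec:
    # (function name, number of inputs, number of outputs).
    nfunc_spec = ("add", 2, 1)

class NoSpecOp:
    # Hypothetical op with no native counterpart.
    pass

def make_elemwise(scalar_op, scalar_fn):
    # Same shape as the quoted condition, with np in place of torch.
    if hasattr(scalar_op, "nfunc_spec") and hasattr(np, scalar_op.nfunc_spec[0]):
        return getattr(np, scalar_op.nfunc_spec[0])  # native fast path
    return np.vectorize(scalar_fn)                   # generic fallback

fn = make_elemwise(AddOp(), lambda a, b: a + b)
print(fn(np.array([1, 2]), np.array([3, 4])))  # [4 6]
```

The fallback branch is where the broadcast-then-map machinery from this PR takes over when no native function exists.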

Review comment on pytensor/link/pytorch/linker.py (outdated, resolved)
@Ch0ronomato Ch0ronomato force-pushed the elemwise_torch_improvement branch 2 times, most recently from 97d6bdc to 1dd6322 Compare November 6, 2024 04:27
@Ch0ronomato Ch0ronomato force-pushed the elemwise_torch_improvement branch from 1dd6322 to c7e8a64 Compare November 6, 2024 04:30
@ricardoV94 ricardoV94 added enhancement New feature or request torch PyTorch backend labels Nov 19, 2024
@ricardoV94 ricardoV94 merged commit 6de3151 into pymc-devs:main Nov 19, 2024
61 of 62 checks passed