Refactoring GPU extensions #1365
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

```
@@ Coverage Diff @@
##             main    #1365       +/-   ##
===========================================
- Coverage   79.59%   50.23%   -29.36%
===========================================
  Files         114      113        -1
  Lines        9032     8979       -53
===========================================
- Hits         7189     4511     -2678
- Misses       1843     4468     +2625
```
So far I have updated the CUDA library to remove `using` statements from the module package. Next I am looking into this.
…/ITensors.jl into kmp5/refactor/update_gpu_backends
```julia
  return StoreT(data)
end

function generic_zeros(StoreT::Type{<:Dense}, dims::Tuple{Integer})
```
```diff
-function generic_zeros(StoreT::Type{<:Dense}, dims::Tuple{Integer})
+function generic_zeros(StoreT::Type{<:Dense}, dims::Integer)
```
My proposal was to define both of them in terms of `::Integer`, not `::Tuple{Integer}`, since it is a simpler interface for new types to overload in general.
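As a rough illustration of that proposal (using a stand-in `Dense` struct, not the actual NDTensors definitions), the `::Integer` method would be the single overload point and the `::Tuple` form would forward to it:

```julia
# Sketch only: a stand-in Dense type, not the NDTensors implementation.
struct Dense{T}
  data::Vector{T}
end

# The one method a new storage type needs to overload:
generic_zeros(StoreT::Type{<:Dense}, dim::Integer) = Dense(zeros(dim))

# Convenience method, defined once, forwarding to the Integer method:
generic_zeros(StoreT::Type{<:Dense}, dims::Tuple{Integer}) =
  generic_zeros(StoreT, only(dims))
```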
My only concern is that if you do something like `generic_zeros(Dense, (2,))`, it calls the `AbstractArray` version of this function.
Right, so then you should rewrite the `AbstractArray` version so that this doesn't happen.
I am not sure I understand what you're suggesting. I reverted the code back to the original design you suggested, which works for both `generic_zeros(Dense, 2)` and `generic_zeros(Dense, (2,))`.

@mtfishman If you have a minute, can you please take a look at this PR again? Thank you, I appreciate your time!
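One way the resolved design could look, sketched with stand-in types (the real NDTensors definitions differ): the generic `AbstractArray` `Tuple` method forwards to the `Integer` method, so a storage type that overloads only the `Integer` method is reached by both call forms:

```julia
# Sketch with stand-in types; not the actual NDTensors code.
struct Dense{T} <: AbstractVector{T}
  data::Vector{T}
end
Base.size(d::Dense) = size(d.data)
Base.getindex(d::Dense, i::Int) = d.data[i]

# Generic AbstractArray fallbacks:
generic_zeros(ArrayT::Type{<:AbstractArray}, dim::Integer) = ArrayT(zeros(dim))
# The Tuple form forwards, so subtypes only overload the Integer method:
generic_zeros(ArrayT::Type{<:AbstractArray}, dims::Tuple{Integer}) =
  generic_zeros(ArrayT, only(dims))

# Dense-specific overload of only the Integer method:
generic_zeros(StoreT::Type{<:Dense}, dim::Integer) = Dense(zeros(dim))

# Both forms now reach the Dense method:
generic_zeros(Dense, 2)    # Integer method directly
generic_zeros(Dense, (2,)) # Tuple fallback forwards to the Dense method
```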
Looks good, thanks!
Description

In this PR I am working to address several refactoring issues: removing `using` statements from module files, trying to diagnose/fix GPU block-sparse backend failures, updating the `adapt` functions to use `storagemode` like AMDGPU does, and removing GPU examples from testing based on the plan listed in the Google Drive document.

Checklist:
- [ ] Implement `resize!` (see JuliaGPU/Metal.jl#279 for a reference on how it might be implemented).
- [ ] Update the `adapt` functions to have `storagemode` as the input parameter.
- [ ] Update the extension libraries to move the bare `using` statements from the `NDTensorsXExt.jl` file to the files where they are necessary.
- [ ] Remove the GPU examples in `NDTensors/ext/examples`.
- [ ] Update the `generic_zeros` and `generic_rand` functions for `Dense` and `AbstractArray`.
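The `storagemode` item might look roughly like the following sketch. This is an illustration only: `adapt_storagemode` is a hypothetical name, not the NDTensors API, and the real code builds on Adapt.jl rather than a plain `convert`:

```julia
# Hypothetical sketch: an adapt-style conversion that takes the target
# storage mode as an explicit type parameter. Names are illustrative;
# the actual NDTensors `adapt` machinery uses Adapt.jl.
function adapt_storagemode(storagemode::Type{<:AbstractArray}, x::AbstractArray)
  return convert(storagemode, x)
end

x = rand(Float64, 3)
# On GPU one would pass something like CuVector{Float32}; here we stay on CPU:
y = adapt_storagemode(Vector{Float32}, x)
```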