Cut down memory requirements for same-split reshape where possible #873
Conversation
Failures may be solved by #857; it would need to be merged to be certain.
Codecov Report
@@            Coverage Diff             @@
##           master     #873      +/-   ##
==========================================
- Coverage   95.50%   87.87%    -7.64%
==========================================
  Files          64       64
  Lines        9579     9588       +9
==========================================
- Hits         9148     8425     -723
- Misses        431     1163     +732

Flags with carried forward coverage won't be shown.
Continue to review the full report at Codecov.
Superseded by #1125.
Description
When reshaping distributed DNDarrays: if `new_split` is the same as the original split, reshape locally via PyTorch, stitch the `local_reshaped` tensors along the split axis, and balance. This allows us to bypass the memory-intensive implementation of the distributed `reshape` in many cases. A minimal sketch of the approach follows below.

Example:
Results on `master`, 2 processes
Results on `enhancement/distributed_reshape_same_split`, 2 processes:

Issue/s addressed: #874
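For illustration, here is a minimal sketch of the same-split shortcut, not the code added by this PR. It assumes `split=0`, local chunks whose element count divides evenly by the trailing output dimensions, and Heat's public `DNDarray.larray`, `ht.array(..., is_split=...)`, and `balance_()` APIs; `same_split_reshape` is a hypothetical helper name.

```python
import heat as ht


def same_split_reshape(a, new_shape):
    """Reshape a split-0 DNDarray to `new_shape`, keeping split=0, by
    reshaping each process-local torch tensor and re-stitching.

    Assumes every local chunk's element count is divisible by the product
    of the trailing output dimensions; a real implementation would check
    this collectively and fall back to the general distributed reshape
    otherwise.
    """
    # 1. reshape locally via PyTorch
    local_reshaped = a.larray.reshape(-1, *new_shape[1:])
    # 2. stitch the local pieces back together along the split axis
    stitched = ht.array(local_reshaped, is_split=0)
    # 3. balance so every process holds roughly the same number of rows
    stitched.balance_()
    return stitched


if __name__ == "__main__":
    # run with e.g. `mpirun -n 2 python sketch.py`
    x = ht.arange(24, split=0)
    y = same_split_reshape(x, (6, 4))  # no global redistribution needed
    print(y.shape, y.split)
```

Because every step operates on the process-local torch tensor, no global communication of the array data is required; only the final `balance_()` moves a few rows between neighbouring processes.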
Changes proposed:
Type of change
Due Diligence
Does this change modify the behaviour of other functions? If so, which?
no