Experiment rechunking cupy array on DGX #59
@mrocklin over the weekend I was running some SVD benchmarks again and came across a very similar issue, which I think may be related to memory spilling. Could you confirm whether workers start to die when they run out of memory? That's exactly what was happening to me.
Sorry, I meant to say when the worker's GPU runs out of memory.
For the first kind of error, yes. I haven't plugged in the DeviceHostDisk spill mechanism yet.
I was getting those errors even with DeviceHostDisk, unless I had some messed-up configuration I didn't notice. That said, it may be that there's a bug and we need to test it better; I will do that soon.
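(For readers following along: a minimal sketch of how device-to-host spilling is enabled in dask-cuda via `LocalCUDACluster`. The `device_memory_limit` threshold below is an arbitrary illustration, not the configuration anyone used in this thread.)

```python
# Minimal sketch: start a dask-cuda cluster with device-to-host
# spilling enabled. The "4GB" threshold is an arbitrary example value.
from dask.distributed import Client
from dask_cuda import LocalCUDACluster

# Each worker spills device buffers to host memory once its GPU
# usage crosses device_memory_limit.
cluster = LocalCUDACluster(device_memory_limit="4GB")
client = Client(cluster)
```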
FWIW I've also had some very similar pains with rechunking (particularly in cases where an array needs to be flattened out). Needed a
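(As a hypothetical illustration of that flattening case, with arbitrary shapes: reshaping to 1-D needs row-contiguous data, so dask has to rechunk first.)

```python
import dask.array as da

# A 2-D array chunked along the second axis (by columns).
x = da.ones((10000, 10000), chunks=(10000, 1000))

# Flattening needs row-contiguous data, so dask must rechunk the
# column chunks into row chunks before it can reshape to 1-D; that
# implicit rechunk is where the pain shows up.
flat = x.reshape(-1)
```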
I may be wrong, but I think the issue here is not directly related to rechunking arrays, but rather to running out of device memory.
Yes, I run out of device memory immediately after starting a computation that follows rechunking. Happy to dive into it further with you if it is of interest.
Let me clean things up a bit and write down installation instructions. Then it'd be good to have people dive in. My thought was that @pentschev or @madsbk might be a better fit, so that you don't get taken away from driving imaging applications.
I will definitely dive into that, since I have a strong feeling that the memory spilling mechanism may not be working properly, or may not be active at all. How urgent is this for both of you?
Not urgent. I recommend waiting until tomorrow at least.
This is very likely related to #57; in fact, it's probably the same bug in device memory spilling.
So I was checking this, and I can't reproduce any cuRAND errors. What I ultimately get instead is an out-of-memory error:

```
Traceback (most recent call last):
  File "dask-cuda-59.py", line 10, in <module>
    y.compute()
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/dask/base.py", line 156, in compute
    (result,) = compute(self, traverse=False, **kwargs)
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/dask/base.py", line 399, in compute
    return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/dask/base.py", line 399, in <listcomp>
    return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/dask/array/core.py", line 828, in finalize
    return concatenate3(results)
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/dask/array/core.py", line 3607, in concatenate3
    return _concatenate2(arrays, axes=list(range(x.ndim)))
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/dask/array/core.py", line 228, in _concatenate2
    return concatenate(arrays, axis=axes[0])
  File "/home/nfs/pentschev/miniconda3/envs/rapids-0.7/lib/python3.7/site-packages/cupy/manipulation/join.py", line 49, in concatenate
    return core.concatenate_method(tup, axis)
  File "cupy/core/_routines_manipulation.pyx", line 563, in cupy.core._routines_manipulation.concatenate_method
  File "cupy/core/_routines_manipulation.pyx", line 608, in cupy.core._routines_manipulation.concatenate_method
  File "cupy/core/_routines_manipulation.pyx", line 637, in cupy.core._routines_manipulation._concatenate
  File "cupy/core/core.pyx", line 134, in cupy.core.core.ndarray.__init__
  File "cupy/cuda/memory.pyx", line 518, in cupy.cuda.memory.alloc
  File "cupy/cuda/memory.pyx", line 1085, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 1106, in cupy.cuda.memory.MemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 934, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc
  File "cupy/cuda/memory.pyx", line 949, in cupy.cuda.memory.SingleDeviceMemoryPool._malloc
  File "cupy/cuda/memory.pyx", line 697, in cupy.cuda.memory._try_malloc
cupy.cuda.memory.OutOfMemoryError: out of memory to allocate 12800000000 bytes (total 38400000000 bytes)
```

After some checking, I was able to confirm that dask-cuda has only ~5GB in the device LRU; all the rest (over 30GB) is temporary CuPy memory. I'm not sure what we can do to make such cases work, or whether we have an option at all. In this particular case, the amount of memory it tries to allocate is exactly the problem size. I will think a bit more about this; if you have any suggestions, please let me know.
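(For anyone reproducing this, a minimal sketch of how the CuPy side of that split can be inspected, using CuPy's standard memory pool API; the dask-cuda device LRU is tracked separately by the worker.)

```python
import cupy

mempool = cupy.get_default_memory_pool()

# Bytes currently backing live allocations vs. bytes the pool holds in
# total (including cached blocks that are free but not yet returned to
# the driver).
print("used bytes :", mempool.used_bytes())
print("total bytes:", mempool.total_bytes())

# Release cached free blocks back to the device; memory still
# referenced by live arrays is unaffected.
mempool.free_all_blocks()
```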
To make sure I understand, the temporary CuPy memory here is likely from some sort of memory manager?
No, I also tried disabling it. The temporary memory could be any intermediate buffers needed, for example, for concatenating multiple arrays, or for any other function that can't write to its input memory (and thus requires additional memory to store its output).
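(A toy illustration of that point, with arbitrary sizes: `cupy.concatenate` has to allocate a fresh output buffer, so the inputs and the output coexist on the device at the peak.)

```python
import cupy

# Two 1 GiB inputs (2**27 float64 elements * 8 bytes each).
a = cupy.zeros(2**27, dtype=cupy.float64)
b = cupy.zeros(2**27, dtype=cupy.float64)

# concatenate cannot write into its inputs; it allocates a new 2 GiB
# output, so peak device usage here is ~4 GiB until a and b are freed.
c = cupy.concatenate([a, b])
```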
There has been great progress on that over the last year or so; I'm closing this, as I don't think it is an issue anymore.
Using the DGX branch and the tom-ucx distributed branch, I'm playing with rechunking a large 2D array from by-row to by-column chunks.
This is a fun experiment because it's a common operation, stresses UCX a bit, and is currently quite fast (when it works).
I've run into the following problems:
Spilling to disk when I run out of device memory (I don't have any spill-to-disk mechanism turned on at the moment)
Sometimes I get this error from the Dask UCX comm code
Sometimes cuRAND seems to dislike me
I don't plan to investigate these personally at the moment, but I wanted to record the experiment somewhere (and this currently seems to be the best place?). I think it might be useful to have someone like @madsbk or @pentschev look into this after the UCX and DGX work gets cleaned up a bit more.
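(For the record, a sketch of what this rechunking experiment might have looked like; the scheduler address, array shape, and chunk sizes are assumptions for illustration, not taken from the original script.)

```python
import cupy
import dask.array as da
from dask.distributed import Client

# Connect to a running dask-cuda cluster; the UCX address is a placeholder.
client = Client("ucx://scheduler:8786")

# A large 2-D array of CuPy chunks, chunked by row.
rs = da.random.RandomState(RandomState=cupy.random.RandomState)
x = rs.random_sample(size=(50000, 50000), chunks=(2500, 50000)).persist()

# Rechunk to by-column chunks: every output chunk depends on every
# input chunk, so this exercises worker-to-worker (UCX) transfers.
y = x.rechunk((50000, 2500)).persist()
```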