
8007000E Not enough memory resources are available to complete this operation. #791

Closed
muhademan opened this issue Oct 10, 2022 · 7 comments
Assignees
Labels
stale Issues that haven't received updates

Comments

@muhademan

When I run the dml_onnx.py file in (amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference, I get an error like this:
(amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>dml_onnx.py
Fetching 19 files: 100%|███████████████████████████████████████████ | 19/19 [00:00<00:00, 1966.19it/s]
2022-10-10 10:05:28.0893026 [E:onnxruntime:, inference_session.cc:1484 onnxruntime::InferenceSession::Initialize::<lambda_70debc81dc7538bfc077b449cf61fe32>::operator()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.

Traceback (most recent call last):
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 220, in <module>
    image = pipe(prompt, height=512, width=768, num_inference_steps=50, guidance_scale=7.5, eta=0.0, execution_provider="DmlExecutionProvider")["sample"][0]
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\amd-stable-diffusion\difusers-dml\examples\inference\dml_onnx.py", line 73, in __call__
    unet_sess = ort.InferenceSession("onnx/unet.onnx", so, providers=[ep])
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\amd-stable-diffusion\amd_venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\BucketizedBufferAllocator.cpp(122)\onnxruntime_pybind11_state.pyd!00007FFA82D145C8: (caller: 00007FFA8336D326) Exception(1) tid(4790) 8007000E Not enough memory resources are available to complete this operation.

How do I fix it?

@patrickvonplaten
Contributor

@muhademan,

Could you please copy-paste a reproducible code snippet here? Sadly, I currently have no idea what code you ran that produced this error, so I cannot really help :-/

@GreenLandisaLie

The same thing happens to me often. I have an RX 560 (4 GB) and 16 GB of DDR3. I can use the ONNX pipeline, but every so often I get that error when initializing the script, which I set up by following this guide:
https://www.travelneil.com/stable-diffusion-windows-amd.html
This is the code:

from diffusers import StableDiffusionOnnxPipeline
pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")

prompt = "A happy celebrating robot on a mountaintop, happy, landscape, dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd"

image = pipe(prompt).images[0]
image.save("output.png")

It might be worth mentioning that if I place the last two lines inside a while loop and the first image successfully starts to generate, then no matter how many times it loops I will not get that error. It only happens sometimes when first running the script, so my guess is that it happens when loading the pipe with from_pretrained. It's just weird to get a 'not enough memory' error that gets solved by re-running the script without even closing any programs.
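Since the failure is intermittent and a plain re-run succeeds, one pragmatic workaround is to retry the pipeline load automatically. This is only a sketch: the `retry` helper below is invented here, and the commented usage assumes the `StableDiffusionOnnxPipeline` call from the snippet above.

```python
import time

def retry(fn, attempts=3, delay=2.0, exc=(RuntimeError,)):
    """Call fn(), retrying up to `attempts` times on the given exceptions.

    Intended for transient failures such as the 8007000E allocator
    error described above; re-raises after the final attempt.
    """
    for i in range(attempts):
        try:
            return fn()
        except exc:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Hypothetical usage with the pipeline from the comment above:
# pipe = retry(lambda: StableDiffusionOnnxPipeline.from_pretrained(
#     "./stable_diffusion_onnx", provider="DmlExecutionProvider"))
```

This does not fix the underlying allocation failure; it just automates the "run it again" workaround the comment describes.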

@patrickvonplaten
Contributor

cc @anton-l here in case you have a hunch of what might be going on

@anton-l
Member

anton-l commented Oct 12, 2022

Windows+AMD GPUs is a very unfamiliar territory for me, but maybe @pingzing or @harishanand95 could check it out :)

@claforte

Hi, I'm @harishanand95's manager. There are critical limitations in ONNX Runtime right now that make this inference path very sub-optimal. We're working hard to find much faster and leaner alternatives, but it's complicated and takes time and effort. Thank you for your patience, and I'm sorry you're running into these issues.

@averad

averad commented Nov 2, 2022

@muhademan how much VRAM and system RAM do you have? Generating a 512x768 image takes more than 8 GB of VRAM and 16 GB of system RAM using ONNX and DmlExecutionProvider.

@GreenLandisaLie when you encounter the reported issue and open Task Manager, what are the GPU VRAM and system RAM usage? I believe a 4 GB card with 16 GB of system RAM is near the bare minimum and would at times run into an out-of-memory error when initializing the pipe, depending on what else is running on your Windows system.

@anton-l I've heard that the diffusers process can be broken into separate stages, loading only parts of the model when needed, which reduces the memory requirements. Example: https://github.com/neonsecret/stable-diffusion. Might be something that could help with the memory issues users report.
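The "load only parts of the model when needed" idea can be sketched generically: keep each heavy component behind a loader callable, materialize it only while its stage runs, and release it before the next stage loads. Everything below is illustrative; real pipelines would load actual checkpoint weights in place of the toy loaders.

```python
import gc

class LazyComponent:
    """Hold a heavy model component behind a loader and free it after use.

    A generic sketch of staged loading: only one component needs to be
    resident at a time, which lowers peak memory at the cost of reload time.
    """
    def __init__(self, loader):
        self._loader = loader  # callable that builds/loads the component
        self._obj = None

    def __enter__(self):
        self._obj = self._loader()  # load only when the stage starts
        return self._obj

    def __exit__(self, *exc):
        self._obj = None  # drop the reference so memory can be reclaimed
        gc.collect()      # encourage reclamation before the next stage
        return False

# Hypothetical staged pipeline (loader names are invented):
# with LazyComponent(load_text_encoder) as enc:
#     embeddings = enc(prompt)
# with LazyComponent(load_unet) as unet:
#     latents = denoise(unet, embeddings)
```

The trade-off is straightforward: each stage pays a reload cost, but the process never holds more than one large component at once.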

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Nov 30, 2022
@github-actions github-actions bot closed this as completed Dec 9, 2022
6 participants