MPS backend out of memory #9133
The 8 GB Mac is not enough to have MPS acceleration, and PyTorch 2.0 / MPS only works on macOS 13+. |
Intel i7 core |
I have also experienced this runtime error while running the open-source version of Whisper on a 2019 MacBook built on an Intel i9 8-core CPU with 16 GB RAM and an AMD Radeon Pro 5500M. I had previously been running a decoder simulation that runs perfectly on Google Colab, which is when the error we've both experienced first appeared; reducing batch sizes massively made no difference, and the error then started appearing in Whisper runs on audio files of negligible size. So I concluded that it wasn't really a memory error at all, whatever the error message may say. However, I extracted the Whisper code to another Jupyter Notebook and it ran perfectly on the GPU using the latest releases from Apple and PyTorch on macOS Ventura 13.3, with 13.0, as @elisezhu123 says, the minimum requirement. So the problem has "gone away" rather than being solved, but I'd suggest just rerunning your code in another clean notebook as a first step. The suggested "fix" with the environment variable is dangerous, and probably unnecessary, but if you do use it I'd try setting it to a value other than 0.0; I think the default is 0.7, i.e. the GPU can use 70% of memory, so maybe raise it a bit. But I really don't think memory is the problem; there's a "glitch" somewhere that changing notebooks fixes. Obviously very happy to be corrected on this if I am mistaken. |
So I can only switch to another computer, right? |
No - misunderstanding of "notebook". I meant that changing the code to another Jupyter (Anaconda3) notebook (not another physical Mac notebook) sorted the problem out for me, but since writing that it has come back again, so I am not sure that what I did solved it at all. There are some suggestions elsewhere that there may be an issue with MacOS Ventura 13.3 but I am not in a position to explore that. |
It is just a bug in 13.3… 13.2 works |
Excuse me, could you please tell me how to activate the MPS mode. I don't quite understand this. |
On Mac, CUDA doesn't work, as Macs don't have a dedicated NVIDIA GPU. So we would have to download a specific version of PyTorch to utilize the Metal Performance Shaders (MPS) backend. This webpage on Apple explains it best. After installing the specific version of PyTorch, you should be able to simply call the MPS backend. Personally, I use this line of code: device = torch.device('mps'). Hope this helps. |
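The pattern described here can be sketched as a small helper. The availability flags stand in for torch.backends.mps.is_available() and torch.cuda.is_available() (the real PyTorch calls), so this sketch runs even without PyTorch installed; the helper name is illustrative only:

```python
# Minimal sketch of the device-selection logic described above.
# The flags mirror torch.backends.mps.is_available() and
# torch.cuda.is_available(); with PyTorch installed you would pass
# those calls in directly.

def pick_device(mps_available: bool, cuda_available: bool) -> str:
    """Prefer MPS on Apple hardware, then CUDA, then fall back to CPU."""
    if mps_available:
        return "mps"
    if cuda_available:
        return "cuda"
    return "cpu"

# With real PyTorch this would become, e.g.:
#   import torch
#   device = torch.device(pick_device(torch.backends.mps.is_available(),
#                                     torch.cuda.is_available()))
```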
Same Problem here, any solution? |
I'm experiencing this with the latest commit of automatic and PyTorch v2 on my M1 8 GB running macOS Ventura 13.3.1 (a).
While normal image generation works, this often occurs if I'm trying to use control net, but not always. Couldn't really figure out what's the differentiator. I have almost all other apps closed to leave maximum RAM unused. What are my options to avoid this? I've noticed @brkirch is posting to discussions about Apple performance and has a fork at https://github.com/brkirch/stable-diffusion-webui/ with 14 commits ahead. Is this something that could speed up my poor performance or solve the "MPS backend out of memory" problem? Will it be ever merged to upstream? 🤔 |
I also keep having this issue if I scale the images on my M1 8 GB Mac Mini. |
anyway to work around the issue? would the recommended solution from the error help? and how to do it? Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit |
This seems to help, at least in my case: PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --precision full --no-half |
Where do you put that line of code? |
I recommend reading the very good documentation on the PyTorch website, which has examples showing how to use the MPS device and how to load data onto it: https://pytorch.org/docs/stable/notes/mps.html |
I think the latest automatic release with PyTorch 2 already does this for you?
|
I am not sure what you mean. PyTorch 2 has MPS support through torch.mps, and the PyTorch nightly, now at 2.1.0 (20230512), also has it, but unless I have missed something the mps device must still be deliberately invoked, because some hardware systems don't have it. Please let me know if I am mistaken!
|
So the line of code device = torch.device('mps') tells PyTorch which device to target. Without this line run first, moving your model and data to device will fail. If you are new to PyTorch and the use of MPS on Mac, I encourage you to read about loading data onto the MPS device here. It is important to know how to load data and model parameters onto devices if you wish to run large models quickly; without that, it would probably take you hours or even days to run just one epoch. Hope this helps! |
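A minimal sketch of the point above, that the model and the data must land on the same device. The transfer callable stands in for torch's .to(device), and all names here are illustrative, so the sketch runs without PyTorch:

```python
# Sketch of the "move the model AND the data to the same device" pattern.
# `transfer` stands in for torch's .to(device); with real PyTorch this is
#   model = model.to(device); batch = batch.to(device)
# Leaving the model and its inputs on different devices is a common error.

def place_on_device(objects, device, transfer):
    """Apply a device transfer to every named object; return their placements."""
    return {name: transfer(obj, device) for name, obj in objects.items()}

def ensure_same_device(placements):
    """Raise a clear error if the objects ended up on different devices."""
    devices = {dev for _, dev in placements.values()}
    if len(devices) != 1:
        raise RuntimeError(f"Mismatched devices: {placements}")
    return devices.pop()

# Hypothetical transfer function standing in for .to(device):
def fake_transfer(obj, device):
    return (obj, device)
```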
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --no-half (without --precision full) works perfectly for me. Since I added PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 I haven't encountered the bug, and the 4 performance cores of my MacBook M1 are used much more than before |
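--no-half keeps the weights in 32-bit floats, which roughly doubles the weight memory versus half precision. A back-of-envelope sketch; the ~860M parameter figure for Stable Diffusion v1.x is an approximation, and this counts weights only, not activations:

```python
def model_bytes(n_params: int, bytes_per_param: int) -> int:
    """Rough lower bound on weight memory: parameters x bytes per parameter."""
    return n_params * bytes_per_param

# Stable Diffusion v1.x has roughly 860 million parameters (approximate).
SD_PARAMS = 860_000_000
fp32 = model_bytes(SD_PARAMS, 4)   # --no-half: float32, ~3.4 GB of weights
fp16 = model_bytes(SD_PARAMS, 2)   # half precision: ~1.7 GB of weights
```

So on an 8 GB machine the float32 weights alone consume a large share of unified memory before any activations are allocated, which is why --no-half trades memory for numerical safety.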
Total noob here. Trying to utilize stable diffusion with deforum extension. Where exactly do I input the PYTORCH_MPS_HIGH_WATERMARK code into? |
In terminal, type : |
Lifesaver. Thank you. It works now. |
tyvm sir, this works, but it is painfully long: 2-3 hours to upscale an image 2x from 640x950 resolution. Is there any way to speed this up? What setting should I adjust in highres.fix? |
Have you tried all the Apple optimisation suggestions at https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon? In the last paragraph there are specific suggestions about timing and how to improve it.
|
I see what you mean. I was misunderstanding you to be suggesting that PyTorch 2 automatically selects the mps device, which I don't think it does. Sorry for the confusion!
(In reply to: "What about this? https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/b08500cec8a791ef20082628b49b17df833f5dda/modules/devices.py#LL38C21")
|
@pudepiedj no problem! |
Regarding the settings, you can put the environment variables into your webui-user.sh as well. This is how mine looks right now:
#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################
# Install directory without trailing slash
#install_dir="/home/$(whoami)"
# Name of the subdirectory
#clone_dir="stable-diffusion-webui"
# PyTorch settings
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7
export PYTORCH_ENABLE_MPS_FALLBACK=1
# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test --upcast-sampling --no-half-vae --no-half --opt-sub-quad-attention --use-cpu interrogate"
# python3 executable
#python_cmd="python3"
... file continues unchanged ... Then all you need to run your web UI is plain ./webui.sh |
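If you launch from your own Python script rather than webui.sh, the same variables can also be set from Python. Note the assumption here (worth verifying) that they must be set before torch is imported, since the MPS allocator reads them via getenv when it initialises:

```python
import os

# Set the allocator limits BEFORE any `import torch`; the MPS allocator
# reads these environment variables when it initialises, so setting them
# afterwards has no effect on an already-created allocator.
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.7"
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# ... only now:  import torch
```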
Does this in fact implement and use the MPS device? I've been investigating over the weekend using the Activity Monitor "GPU History" display and I don't think my GPU is being used at all; stable-diffusion is just running on the CPU. This of course may explain why I am not getting the "MPS Backend Out of Memory" error, too! :)
|
Hi, I guess you're also using stable diffusion with ControlNet here. One easy way is to reduce your batch size, e.g. if you kept the batch size at 8, reduce it to 4 or 5, or lastly just 1. It should work and would be faster.
|
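The batch-size advice above can be automated with a simple backoff loop. run_batch here is a hypothetical stand-in for whatever generation call raises the out-of-memory RuntimeError:

```python
def generate_with_backoff(run_batch, batch_size, min_size=1):
    """Retry with a halved batch size whenever an out-of-memory error occurs."""
    while batch_size >= min_size:
        try:
            return batch_size, run_batch(batch_size)
        except RuntimeError as err:
            if "out of memory" not in str(err).lower():
                raise  # unrelated error: don't mask it
            batch_size //= 2
    raise RuntimeError("out of memory even at the minimum batch size")

# Hypothetical runner that only fits 4 images at a time:
def fake_run(n):
    if n > 4:
        raise RuntimeError("MPS backend out of memory")
    return ["image"] * n
```

With fake_run, generate_with_backoff(fake_run, 8) fails at 8, halves to 4, and succeeds there.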
Thank you! It works on my M2 Max device. It uses GPU instead of CPU. |
Hello, I have been trying to build a simple Python GUI using tkinter for stable diffusion. I keep hitting the same issue since I'm using an M1 Mac. I tried adding --skip-torch-cuda-test directly in my .py code but it's not working, please help. Error: RuntimeError: MPS backend out of memory (MPS allocated: 16.46 GB, other allocations: 1.98 GB, max allowed: 18.13 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure). Here's my code:
import os

# Set environment variables
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.9"

# Set command line arguments
os.environ["COMMANDLINE_ARGS"] = "--skip-torch-cuda-test --upcast-sampling --no-half-vae --no-half --opt-sub-quad-attention --use-cpu interrogate"

SDV5_MODEL_PATH = "/Users/user/stable-diffusion-v1-5/"

if not os.path.exists(SAVE_PATH):

def uniquify(path):

prompt = "A dog rising in motorcycle"
print(f"Characters in prompt: {len(prompt)}, limit: 200")

pipe = StableDiffusionPipeline.from_pretrained(SDV5_MODEL_PATH)
output = pipe(prompt)

# Use the images attribute to access the generated images
image = output.images[0]  # Adjusted this line based on your findings

# Save the image
image_path = uniquify(os.path.join(SAVE_PATH, (prompt[:25] + "...") if len(prompt) > 25 else prompt) + ".png")
image.save(image_path) |
the default values can be seen in the source code: static const char* high_watermark_ratio_str = getenv("PYTORCH_MPS_HIGH_WATERMARK_RATIO");
const double high_watermark_ratio =
high_watermark_ratio_str ? strtod(high_watermark_ratio_str, nullptr) : default_high_watermark_ratio;
setHighWatermarkRatio(high_watermark_ratio);
const double default_low_watermark_ratio =
m_device.hasUnifiedMemory ? default_low_watermark_ratio_unified : default_low_watermark_ratio_discrete;
static const char* low_watermark_ratio_str = getenv("PYTORCH_MPS_LOW_WATERMARK_RATIO");
const double low_watermark_ratio =
low_watermark_ratio_str ? strtod(low_watermark_ratio_str, nullptr) : default_low_watermark_ratio;
setLowWatermarkRatio(low_watermark_ratio); // (see m_high_watermark_ratio for description)
constexpr static double default_high_watermark_ratio = 1.7;
// we set the allowed upper bound to twice the size of recommendedMaxWorkingSetSize.
constexpr static double default_high_watermark_upper_bound = 2.0;
// (see m_low_watermark_ratio for description)
// on unified memory, we could allocate beyond the recommendedMaxWorkingSetSize
constexpr static double default_low_watermark_ratio_unified = 1.4;
constexpr static double default_low_watermark_ratio_discrete = 1.0;
// high watermark ratio is a hard limit for the total allowed allocations
// 0. : disables high watermark limit (may cause system failure if system-wide OOM occurs)
// 1. : recommended maximum allocation size (i.e., device.recommendedMaxWorkingSetSize)
// >1.: allows limits beyond the device.recommendedMaxWorkingSetSize
// e.g., value 0.95 means we allocate up to 95% of recommended maximum
// allocation size; beyond that, the allocations would fail with OOM error.
double m_high_watermark_ratio; |
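To see what that C++ logic resolves to for a given environment, it can be mirrored in pure Python; the numeric defaults are copied from the source lines quoted above, and the function is an illustrative sketch, not PyTorch API:

```python
# Defaults copied from the C++ source quoted above.
DEFAULT_HIGH_WATERMARK_RATIO = 1.7
DEFAULT_LOW_WATERMARK_RATIO_UNIFIED = 1.4
DEFAULT_LOW_WATERMARK_RATIO_DISCRETE = 1.0

def resolve_watermark_ratios(env, has_unified_memory):
    """Replicate how the MPS allocator picks its high/low watermark ratios.

    `env` is a mapping like os.environ; `has_unified_memory` mirrors
    m_device.hasUnifiedMemory (True on Apple silicon).
    """
    high = float(env.get("PYTORCH_MPS_HIGH_WATERMARK_RATIO",
                         DEFAULT_HIGH_WATERMARK_RATIO))
    low_default = (DEFAULT_LOW_WATERMARK_RATIO_UNIFIED if has_unified_memory
                   else DEFAULT_LOW_WATERMARK_RATIO_DISCRETE)
    low = float(env.get("PYTORCH_MPS_LOW_WATERMARK_RATIO", low_default))
    return high, low
```

Call it with os.environ to check your own session; note the default high watermark is 1.7, i.e. 170% of recommendedMaxWorkingSetSize, not 0.7 as guessed earlier in this thread.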
@pudepiedj @branksypop @honzajavorek
Thanks, apparently my torch installation on M1 was having a problem. I've reinstalled it and it's now working. Now I received a new error: NotImplementedError: The operator 'aten::index.Tensor' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on pytorch/pytorch#77764. As a temporary fix, you can set the environment variable --> Essentially, here's what's happening for Apple silicon users: Option #1: GPU (not possible). Option #2: CPU (I tried it; it takes 30 minutes to generate 1 picture). Option #3: MPS, but I have the new error above. Option #4: use AUTOMATIC1111, which impressively generates 1 picture in only 20 seconds; however, it's not customisable, say if you want to build something like that as a project for a client. So yeah, it's a painful situation for Apple silicon users wanting to build an AI program using SD from scratch. |
Replacing this code will allow you to map it, but the ControlNet functionality will not work properly
Thanks, I presume this answer is for AUTOMATIC1111 users, correct? This won't be applicable for those who are building their customised program using stable diffusion, from scratch, as all of the dependencies will need to be done. Editing webui.sh is not applicable for this scenario. Looking forward from someone who was able to run stable diffusion successfully in their Apple silicon machines using MPS (not CPU) with their own customised program. |
I run this in 13.4.1 but also have the same problem |
For me the problem was the canvas size (1280x720), so I used something smaller (640x320) and got no more MPS problems. In case you need higher resolutions, create your images/videos at small resolutions and then use Topaz, another AI, which will do the job of increasing size and quality |
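Canvas size matters because activation memory grows roughly with the pixel count (the exact scaling depends on the model, so treat this as a rough heuristic). Comparing the two resolutions mentioned above:

```python
def pixel_ratio(w1, h1, w2, h2):
    """How many times more pixels (and, roughly, activation memory)
    the first canvas needs compared with the second."""
    return (w1 * h1) / (w2 * h2)

# 1280x720 versus 640x320, the sizes mentioned above:
ratio = pixel_ratio(1280, 720, 640, 320)  # 4.5x more pixels
```

So dropping from 1280x720 to 640x320 cuts the pixel count by a factor of 4.5, which is consistent with it sliding under the allocator's limit.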
Hello, my error is basically the same: "RuntimeError: MPS backend out of memory". I tried several of the methods mentioned here and unfortunately had no success; to be very specific, I could not use the "Hires. fix" option, the process was always interrupted by this error, so I could not make images larger than 768x768. This morning, with the help of ChatGPT-4, I solved the bug, and here is how, in very condensed form; I hope it is useful.
Install miniconda (if you already have it, skip this step):
curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
sh Miniconda3-latest-MacOSX-arm64.sh
Create a new virtual environment with Conda, specifying Python 3.9. Copy and paste the following command into the terminal:
conda create --name "your name" python=3.9
Activate the virtual environment:
conda activate "your name previously"
Install PyTorch in the virtual environment:
conda install pytorch torchvision torchaudio -c pytorch-nightly
This step checks whether MPS (Metal Performance Shaders) is available and, if so, creates a tensor on the MPS device and prints it. It is a way to validate that everything is working correctly. Save this as mps_test.py:
import torch
if torch.backends.mps.is_available():
    device = torch.device('mps')
    x = torch.ones(1, device=device)
    print(x)
else:
    print("MPS device not found.")
Run the Python file:
python mps_test.py
tensor([1.], device='mps:0')
This has to be your result in order for things to work smoothly. If you are experiencing memory problems with the MPS backend, you can adjust the proportion of memory PyTorch is allowed to use. Finally, copy and paste the following command into the terminal:
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
At this point, the only thing left to do is to start the UI |
Hello, I did the same steps but using Anaconda. The best time I get on CPU is 6 minutes (for a single image) with an entry-level MBP M1 Pro (2021). Are you able to successfully generate an image from your customised program (not AUTOMATIC1111) without encountering the error? If yes, feel free to share the code or tweaks you made. Essentially, the underlying issue is that you can use AUTOMATIC1111 and generate all the images you want with MPS, because it has made a lot of changes in the backend with embeddings, etc., so no issues there. The problem starts if you create your own Python program (not AUTOMATIC1111) with stable diffusion and generate an image: it will always prompt that error about MPS. The workaround is changing it to CPU, or resorting to a device with a GPU/CUDA like a Windows laptop or PC. |
tensor([1.], device='mps:0') <--- that's the result from my machine which means MPS is activated. Looking forward if someone would like to share their code if they are able to successfully generate an image with MPS as device in an apple silicon machine. |
I'm sorry it was not helpful. After several days this worked for me; I will try to test more variables to see if I can find another alternative. |
I run this in 13.5, same problem. 2.3 GHz 8-Core Intel Core i9 |
Have you already tried this? "https://developer.apple.com/metal/pytorch/". And this? |
add PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 in the command you use to start WebUI. |
Based on the comment from AUTOMATIC1111#9133 (comment). Using the GPU is slower for some reason and lags my computer
Thank you @efeLongoria, I was able to produce the same output tensor([1.], device='mps:0'), however I am still encountering the same issue. MPS backend out of memory (MPS allocated: 6.50 GB, other allocations: 29.72 GB, max allowed: 36.27 GB). Tried to allocate 128.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure). I have been reading all the comments and some people did fix it with PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --precision full --no-half. What is ./webui.sh? Where should I download that file? By the way, I am using an M2 with 32 GB |
I had the same problem with ComfyUI running vid2vid and received this error: I fixed it by rebooting ComfyUI via this command: |
I have a hunch that GPU VRAM may not be getting flushed correctly by A1111 after generations when running on MacOS installations leveraging PyTorch and MPS, since I'm seeing VRAM usage increase after each consecutive image generation (Intel Mac Pro with AMD GPU) until between gen 5 and 10 I get the "MPS Backend out of memory" error, forcing me to restart SD Web UI to complete more generations. To any engineer looking to fix this in the A1111 codebase, this article may be useful: https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530 particularly this comment https://discuss.pytorch.org/t/how-can-we-release-gpu-memory-cache/14530/27 also https://forums.fast.ai/t/gpu-memory-not-being-freed-after-training-is-over/10265?u=cedric |
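If the leak hypothesis above is right, clearing caches between generations should help. A hedged sketch: with PyTorch the cleanup hooks would typically be gc.collect and torch.mps.empty_cache() (available in PyTorch 2.x), but dummy hooks are injected here so the sketch runs anywhere, and the generate callable is a stand-in for the real pipeline call:

```python
def generate_all(prompts, generate, cleanup_hooks):
    """Run one generation per prompt, invoking cleanup hooks after each one
    so cached allocations are released before the next generation starts."""
    results = []
    for prompt in prompts:
        results.append(generate(prompt))
        for hook in cleanup_hooks:   # e.g. [gc.collect, torch.mps.empty_cache]
            hook()
    return results
```

With real PyTorch this would be called as generate_all(prompts, pipe_call, [gc.collect, torch.mps.empty_cache]); whether that actually stops the VRAM creep on Intel Macs with AMD GPUs would need testing.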
With @viking1304's help, I've tested a1111 with PyTorch 2.3.0.dev20240103 today on my aforementioned Mac Pro 2019 Intel + AMD 6900XT GPU rig and am no longer getting this MPS Out of Memory error! Yay! Installed latest PyTorch dev version using viking1304's A1111 installer - https://github.com/viking1304/a1111-setup |
I have this problem in the terminal after this command. Help me to solve it. (base) MacBook-Pro-2:~ aleksendrmykolaienko$ PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --precision full --no-half |
I am also facing the same problem, where to put these lines of code |
In the terminal: you need to run SD with that command |
What do you mean by SD? |
Stable Diffusion |
I am facing a similar issue for Llama 3.2. I am using "Llama-3.2-3B-Instruct" in PyTorch. I have a Mac M1 Pro, 16 GB. |
Try using Chrome |
Is there an existing issue for this?
What happened?
macOS. I have successfully opened the http://127.0.0.1:7860/ site, but this error appears when generating an image:
RuntimeError: MPS backend out of memory (MPS allocated: 5.05 GB, other allocations: 2.29 GB, max allowed: 6.77 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Steps to reproduce the problem
Install MPS
What should have happened?
RuntimeError: MPS backend out of memory (MPS allocated: 5.05 GB, other allocations: 2.29 GB, max allowed: 6.77 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Commit where the problem happens
python: 3.10.10 • torch: 1.12.1 • xformers: N/A • gradio: 3.16.2 • commit: 0cc0ee1 • checkpoint: bf864f41d5
What platforms do you use to access the UI ?
MacOS
What browsers do you use to access the UI ?
Apple Safari
Command Line Arguments
List of extensions
NO
Console logs
RuntimeError: MPS backend out of memory (MPS allocated: 5.05 GB, other allocations: 2.29 GB, max allowed: 6.77 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure)
Additional information
No response