Getting error on running run.bat (Connection Errored out) #1112

Closed
vivekratr opened this issue Dec 1, 2023 · 16 comments

@vivekratr

Can you please help me? When I click run.bat, the local server starts, but after I enter a prompt it automatically closes and the web UI shows an error. Nothing is printed to the terminal. Please help.

@CodeAadarsh

Can you please share a screenshot of the terminal and the web UI?

@dagweg

dagweg commented Dec 1, 2023

I'm having the same issue.
Here are the screenshots you asked for. Any help appreciated... thanks!

[screenshots of the terminal and web UI attached]

@akash-cs13

akash-cs13 commented Dec 3, 2023

I have the same problem, running it on 8 GB of RAM and an NVIDIA GTX 1650 with 4 GB of VRAM. Was anyone able to fix it?
[screenshot attached: Screenshot 2023-12-03 073452]
It pauses automatically without any errors.

@anteAutomate

Same issue here; I haven't found a fix yet.

@AntonDVP

AntonDVP commented Dec 5, 2023

Same here. RTX 3060 laptop GPU with 6 GB of VRAM, a 12th-gen i7, and 16 GB of RAM.

@Ragnarok700

Same issue here too. I'm using an NVIDIA GeForce GTX 970 (4 GB) with 32 GB of system RAM and running the model in low VRAM mode.
[screenshots attached]

@Ragnarok700

Ragnarok700 commented Dec 7, 2023

Small update: I tried connecting using Chrome instead of Firefox and the interface still didn't really load, but instead of seeing the script exit/pause, I got this error:

Y:\Documents\AI stuff>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.824
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
Total VRAM 4096 MB, total RAM 32711 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Set vram state to: LOW_VRAM
Disabling smart memory management
Device: cuda:0 NVIDIA GeForce GTX 970 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "Y:\Documents\AI stuff\Fooocus\modules\async_worker.py", line 25, in worker
    import modules.default_pipeline as pipeline
  File "Y:\Documents\AI stuff\Fooocus\modules\default_pipeline.py", line 253, in <module>
    refresh_everything(
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "Y:\Documents\AI stuff\Fooocus\modules\default_pipeline.py", line 233, in refresh_everything
    refresh_base_model(base_model_name)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "Y:\Documents\AI stuff\Fooocus\modules\default_pipeline.py", line 69, in refresh_base_model
    model_base = core.load_model(filename)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "Y:\Documents\AI stuff\Fooocus\modules\core.py", line 152, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings)
  File "Y:\Documents\AI stuff\Fooocus\backend\headless\fcbh\sd.py", line 458, in load_checkpoint_guess_config
    clip = CLIP(clip_target, embedding_directory=embedding_directory)
  File "Y:\Documents\AI stuff\Fooocus\backend\headless\fcbh\sd.py", line 101, in __init__
    self.cond_stage_model = clip(**(params))
  File "Y:\Documents\AI stuff\Fooocus\backend\headless\fcbh\sdxl_clip.py", line 40, in __init__
    self.clip_l = sd1_clip.SDClipModel(layer="hidden", layer_idx=11, device=device, dtype=dtype, layer_norm_hidden_state=False)
  File "Y:\Documents\AI stuff\Fooocus\backend\headless\fcbh\sd1_clip.py", line 83, in __init__
    self.transformer = model_class(config)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 782, in __init__
    self.text_model = CLIPTextTransformer(config)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 699, in __init__
    self.embeddings = CLIPTextEmbeddings(config)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\transformers\models\clip\modeling_clip.py", line 210, in __init__
    self.position_embedding = nn.Embedding(config.max_position_embeddings, embed_dim)
  File "Y:\Documents\AI stuff\python_embeded\lib\site-packages\torch\nn\modules\sparse.py", line 142, in __init__
    self.weight = Parameter(torch.empty((num_embeddings, embedding_dim), **factory_kwargs),
RuntimeError: [enforce fail at alloc_cpu.cpp:80] data. DefaultCPUAllocator: not enough memory: you tried to allocate 236544 bytes.

@mashb1t
Collaborator

mashb1t commented Dec 10, 2023

< 8 GB VRAM card users => see #1240 (comment)

@AntonDVP

> < 8 GB VRAM card users => see #1240 (comment)

Thank you! It's a pity that things are so difficult for those with less than 8 GB. By the way, wouldn't it be better to add some kind of hardware detection and warn the user up front about potential issues? Otherwise users panic and assume something is badly installed, performing poorly, or misconfigured, and they end up running in circles, entirely perplexed, and asking you the same kinds of questions.

@mashb1t
Collaborator

mashb1t commented Dec 10, 2023

The mode is already switched to LOW_VRAM if (too) little VRAM is detected, if that's what you're referring to. It might still work on lower specs, but with significantly longer generation times.
We have two places where VRAM availability could be checked:

  1. in launch.py
  2. in the backend

Afaik the backend is basically Comfy and is kept roughly in sync, so an additional warning would be better implemented in option 1.
I'm not a maintainer and can't decide this, but I support your proposal.
Feel free to create a PR 👍
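For illustration, a rough sketch of what such an up-front warning in launch.py could look like. This is a hypothetical addition, not existing Fooocus code; it assumes torch is already importable at that point and only prints information, without changing any behaviour:

import torch

def warn_if_low_vram(min_gb: float = 8.0) -> None:
    # Informational check run before the UI starts; does not alter runtime behaviour.
    if not torch.cuda.is_available():
        print("[Fooocus] No CUDA GPU detected; expect very slow generation.")
        return
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    if vram_gb < min_gb:
        print(f"[Fooocus] {props.name} has only {vram_gb:.1f} GB of VRAM "
              f"(recommended: {min_gb:.0f} GB+ for SDXL).")
        print("[Fooocus] Low VRAM mode will be used; generation will be much slower "
              "and may fail if system RAM / page file is also small.")

warn_if_low_vram()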

@Ragnarok700

> The mode is already switched to LOW_VRAM if (too) little VRAM is detected, if that's what you're referring to. It might still work on lower specs, but with significantly longer generation times. We have two places where VRAM availability could be checked:
>
> 1. in launch.py
> 2. in the backend
>
> Afaik the backend is basically Comfy and is kept roughly in sync, so an additional warning would be better implemented in option 1. I'm not a maintainer and can't decide this, but I support your proposal. Feel free to create a PR 👍

> As soon as your VRAM is fully used, system memory is used instead, which is significantly slower (around 80x slower; it is only used when Resizable BAR is supported). Your 1650 Ti only has 4 GB of VRAM, which means that more than half of the needed capacity is moved to RAM. For SDXL it's recommended to use a GPU with at least 8 GB of VRAM to keep the model in GPU memory during generation. Using --lowvram may only help so much; it is enabled by default if Fooocus detects that you have too little VRAM for normal mode, but it still might not be enough to use Fooocus reasonably.

TL;DR: Are there any options or paths to get this working (slower) with my current hardware, or am I just out of luck?

I did check the other comment you made (#1240 (comment)), and fair enough about needing more VRAM and all that... but what happens with LOW_VRAM mode, then? I noticed that in my earlier logs it just paused itself and stopped connecting to the interface.

I'm assuming the log I got when using Chrome finally gave me more info (i.e. something about a RAM issue), but I find it odd that the script simply pauses (in most of my attempts, with any browser) instead of giving any feedback on why it's pausing.

So, is there any path forward for those of us with 4 GB of VRAM (and for this issue of the script pausing/crashing with no errors)? I don't mind if it takes forever to render an image, but right now I can't try it at all, which just feels bad... :(

@mashb1t
Collaborator

mashb1t commented Dec 10, 2023

@Ragnarok700 @uxvisionpro

As it's working for some community members, I assume that image generation is in general possible on supported GPUs with 4 GB of VRAM; sadly I don't have a GPU with these specs available. Can you please check whether your UEFI and GPU/mainboard support Resizable BAR or SAM (Smart Access Memory) and post your results?

Steps:

  1. Check if Resizable BAR is supported for your GPU/mainboard
  2. Enable it in UEFI/BIOS, or update the UEFI/BIOS to a version with support if available
  3. Verify in the NVIDIA Control Panel

Links:
https://nvidia.custhelp.com/app/answers/detail/a_id/5165
https://www.asus.com/support/FAQ/1046107/
https://www.intel.com/content/www/us/en/support/articles/000090831/graphics.html
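For reference, one way to check the current state from Python without rebooting into the UEFI is to compare the GPU's BAR1 aperture with its total VRAM: with Resizable BAR active, BAR1 is typically about as large as the whole VRAM, while the legacy default is only 256 MB. This is a rough, hypothetical sketch (assumes the pynvml package is installed); the NVIDIA Control Panel's System Information dialog, as in step 3 above, remains the authoritative check:

from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetMemoryInfo, nvmlDeviceGetBAR1MemoryInfo)

nvmlInit()
try:
    handle = nvmlDeviceGetHandleByIndex(0)
    vram = nvmlDeviceGetMemoryInfo(handle).total
    bar1 = nvmlDeviceGetBAR1MemoryInfo(handle).bar1Total
    print(f"Total VRAM: {vram / 1024 ** 3:.1f} GB")
    print(f"BAR1 size : {bar1 / 1024 ** 3:.1f} GB")
    # Heuristic: a BAR1 aperture roughly the size of the VRAM usually means
    # Resizable BAR is active; a small (e.g. 256 MB) BAR1 means it is not.
    if bar1 >= 0.9 * vram:
        print("Resizable BAR looks ENABLED")
    else:
        print("Resizable BAR looks disabled or unsupported")
finally:
    nvmlShutdown()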

@Ragnarok700


Just got back home (23h EST), so I'll check that tomorrow and edit this post once I have the info ;)
Thanks for the assistance, @mashb1t, btw! Much appreciated!

@Ragnarok700


My NVIDIA GeForce GTX 970 does not seem to support Resizable BAR, and neither does my motherboard (ASUS Prime B250M-A)... :| :(

@lllyasviel
Owner

https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md

@Ragnarok700

I did try the troubleshooting step related to the swap file maximum size... I was surprised that it actually did the trick. That said, it was (as expected) quite slow to generate a single image.
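For anyone else hitting the same DefaultCPUAllocator "not enough memory" error, a quick way to see how much RAM and page file (swap) headroom is available before launching: a hypothetical helper, assuming the psutil package is installed.

import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM  total: {vm.total / 1024 ** 3:.1f} GB, available: {vm.available / 1024 ** 3:.1f} GB")
print(f"Swap total: {sw.total / 1024 ** 3:.1f} GB, used: {sw.used / 1024 ** 3:.1f} GB")
# The SDXL checkpoint is typically loaded through system RAM first; if RAM plus
# free page file is too small, enlarging the Windows page file (as described in
# troubleshoot.md) can avoid the DefaultCPUAllocator error.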

@mashb1t mashb1t closed this as completed Dec 30, 2023