
[Bug]: A tensor with all NaNs was produced in Unet #10343

Closed

HarmonyHana opened this issue May 13, 2023 · 10 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments


HarmonyHana commented May 13, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I was using Stable Diffusion as I have for at least a month now, and today it started producing the error in the title.
I've tried everything I could find (--xformers, --no-half, etc.) to no effect. I also tried the command line argument that disables the check, and I just get a black image.
Reinstalling Python didn't help either. This happens regardless of the model used; the only checkpoint that still works is the starter sd-v1.4.

Steps to reproduce the problem

I have no idea what causes the issue, so unfortunately I can't answer this question.

What should have happened?

Images should generate as normal.

Commit where the problem happens

22bcc7b

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--autolaunch --medvram --xformers
Before this issue, I had no command line arguments.

List of extensions

(screenshot of installed extensions; image not captured)

Console logs

venv "C:\New folder\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Installing sd-dynamic-prompts requirements.txt


#######################################################################################################
Initializing Civitai Link
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Civitai Link revision: d0e83e7b048f9a6472b4964fa530f8da754aba58
SD-WebUI revision: 22bcc7be428c94e9408f589966c2040187245d81

Checking Civitai Link requirements...
[!] python-socketio[client] version 5.7.2 NOT installed.

#######################################################################################################

Launching Web UI with arguments: --autolaunch --xformers --medvram
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: C:\New folder\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Civitai Helper: No setting file, use default
Civitai: API loaded
Loading weights [01ffa0eab4] from C:\New folder\stable-diffusion-webui\models\Stable-diffusion\botw_style_offset.safetensors
Creating model from config: C:\New folder\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(1): bad_prompt_version2
Model loaded in 8.8s (load weights from disk: 0.8s, create model: 7.4s, apply half(): 0.6s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Civitai: Check resources for missing preview images
Startup time: 23.8s (import torch: 3.9s, import gradio: 3.4s, import ldm: 1.4s, other imports: 3.2s, list SD models: 0.2s, setup codeformer: 0.2s, load scripts: 1.3s, load SD checkpoint: 8.8s, create ui: 0.9s, gradio launch: 0.3s).
Civitai: Found 18 resources missing preview images
Civitai: Found 13 hash matches
Civitai: Updated 0 preview images
  0%|                                                                                           | 0/20 [00:05<?, ?it/s]
Error completing request
Arguments: ('task(73in11c7k15y1cu)', 'masterpiece, best_quality, 1girl, solo, princess zelda, nintendo, the legend of zelda, botw, totk, short hair, capelet, galaxy(1.543),Night Sky, Dress, colorful night sky ,<lora:zelda_botw_v1:1>', '(bad_prompt:0.8), lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 432, 768, False, 0.7, 2.5, 'Latent', 0, 0, 0, [], 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
  File "C:\New folder\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\New folder\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\New folder\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "C:\New folder\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "C:\New folder\stable-diffusion-webui\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "C:\New folder\stable-diffusion-webui\modules\processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\New folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\New folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "C:\New folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\New folder\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\New folder\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\New folder\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\New folder\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 152, in forward
    devices.test_for_nans(x_out, "unet")
  File "C:\New folder\stable-diffusion-webui\modules\devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.

Additional information

I am not very GitHub- or AI-generation-literate, so if I missed something, please let me know.

@HarmonyHana HarmonyHana added the bug-report Report of a bug, yet to be confirmed label May 13, 2023
@Sakura-Luna Sakura-Luna changed the title [Bug]: [Bug]: A tensor with all NaNs was produced in Unet May 14, 2023
HarmonyHana (Author) commented:

Small update: I tried again today and completely reinstalled Stable Diffusion on a different drive altogether. I simply copied over my models, LoRAs, and embeddings, but the install itself was pulled fresh from git.
It didn't have the issue for the first few hours, but after I restarted the webui once, the same thing started again. The images I produced before the issue appeared were perfectly fine, which makes me believe this has to be a false positive or something similar.


Lalimec commented May 15, 2023

Not Unet, but I got the VAE version of this error.

img2img:  sam yang (master piece, best quality,:1.2), 1man, anime of (ryan gosling:1.1) standing in front of a building with a car behind him and a building in the background with a clock tower, Artgerm, anime art style,  anime key visual, <lora:samdoesartsSamYang_offsetRightFilesize:1>
2023-05-15 18:10:38,295 - dynamic_prompting.py [line:476] - INFO: Prompt matrix will create 1 images in a total of 1 batches.
Loading model from cache: control_v11p_sd15_openpose [cab727d4]
Loading preprocessor: openpose_face
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 1823
raw_W = 1421
target_H = 1024
target_W = 704
estimation = 798.1919912232584
preprocessor resolution = 798
locon load lora method
100%|███████████████████████████████████████████| 38/38 [00:19<00:00,  1.90it/s]
Error completing request
Arguments: ('task(zwcbo5nbpdb2nz9)', 0, ' sam yang (master piece, best quality,:1.2), 1man, anime of (ryan gosling:1.1) standing in front of a building with a car behind him and a building in the background with a clock tower, Artgerm, anime art style,  anime key visual, <lora:samdoesartsSamYang_offsetRightFilesize:1>', 'teeth, EasyNegative, 3d, 3dcg, doll, lowres, bad anatomy, wrong anatomy, mutated hands and fingers, mutation, mutated, amputation, naked, nsfw, logo, watermark, text', [], <PIL.Image.Image image mode=RGBA size=1421x1823 at 0x7F5C9C334640>, None, None, None, None, None, None, 38, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.5, -1.0, -1.0, 0, 0, 0, False, 0, 1024, 704, 1, 0, 0, 32, 0, '', '', '', [], 0, False, 'img2img', False, '', '', False, 'Euler a', False, '_general/realisticVisionV20_v20.safetensors [e6415c4892]', True, 0.5, True, 4, True, 32, True, False, 30, False, 6, False, 512, 512, '', False, 1, 'Both ▦', False, '', False, False, True, False, False, False, False, 100, 100, False, '', '', '', 'generateMasksTab', 4, 4, 2.5, 30, 1.03, 1, 1, 5, 0.5, 5, False, True, False, 20, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 
0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <controlnet.py.UiControlNetUnit object at 0x7f5c9c334130>, <controlnet.py.UiControlNetUnit object at 0x7f5c9c335270>, <controlnet.py.UiControlNetUnit object at 0x7f5b5b527ac0>, False, '', 0.5, True, False, '', 'Lerp', False, False, False, 'Horizontal', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, False, 3, 0, False, False, 0, False, 0, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, '', 0, 0, 384, 384, False, False, True, True, True, 1, '<ul>\n<li><code>CFG Scale</code> should be 2 or lower.</li>\n</ul>\n', True, True, '', '', True, 50, True, 1, 0, False, None, '', 0.2, 0.1, 1, 1, False, True, True, False, False, False, False, 4, 0.5, 'Linear', 'None', 4, 0.09, True, 1, 0, 7, False, False, 'None', None, 1, 'None', False, False, 'PreviousFrame', 'Show/Hide AlphaCanvas', 384, 'Update Outpainting Size', 8, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], '', 1, True, 100, False, False, 'positive', 'comma', 0, False, False, '', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, False, None, False, None, False, 50, 10.0, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 75, 0.0001, 0.0, False, True, False, 0, 0, 384, 384, False, True, False, False, 0, 1, False, 1, True, True, False, False, ['left-right', 'red-cyan-anaglyph'], 2.5, 'polylines_sharp', 0, False, False, False, False, False, False, 'u2net', False, True, False, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 
32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
Traceback (most recent call last):
  File "/home/ubuntu/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/home/ubuntu/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/ubuntu/stable-diffusion-webui/modules/img2img.py", line 181, in img2img
    processed = process_images(p)
  File "/home/ubuntu/stable-diffusion-webui/modules/processing.py", line 515, in process_images
    res = process_images_inner(p)
  File "/home/ubuntu/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/home/ubuntu/stable-diffusion-webui/modules/processing.py", line 673, in process_images_inner
    devices.test_for_nans(x, "vae")
  File "/home/ubuntu/stable-diffusion-webui/modules/devices.py", line 156, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in VAE. This could be because there's not enough precision to represent the picture. Try adding --no-half-vae commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
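For reference, the fixes suggested by both error messages (Unet and VAE variants) are usually applied by editing the launcher's `COMMANDLINE_ARGS`. A hypothetical `webui-user.sh` sketch follows (on Windows, `webui-user.bat` uses `set` instead of `export`); the flag names are taken directly from the error messages above:

```shell
# Sketch only: flags quoted from the NansException messages.
# --no-half      runs the model in float32 (addresses the Unet NaN case)
# --no-half-vae  runs the VAE in float32 (addresses the VAE NaN case)
export COMMANDLINE_ARGS="--no-half --no-half-vae"

# Last resort: disables the NaN check entirely, which typically
# yields black images instead of an exception.
# export COMMANDLINE_ARGS="--disable-nan-check"
```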

thomasaum commented:

Same issue here; only sd-v1.4 works.


xxaier commented May 24, 2023

Same error:

Traceback (most recent call last):
  File "/Users/z/git/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/Users/z/git/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/Users/z/git/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/Users/z/git/stable-diffusion-webui/modules/processing.py", line 526, in process_images
    res = process_images_inner(p)
  File "/Users/z/git/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/Users/z/git/stable-diffusion-webui/modules/processing.py", line 680, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/Users/z/git/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 252, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "/Users/z/git/stable-diffusion-webui/modules/processing.py", line 907, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/Users/z/git/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/Users/z/git/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 251, in launch_sampling
    return func()
  File "/Users/z/git/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 377, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/Users/z/git/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/z/git/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/Users/z/git/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/z/git/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 167, in forward
    devices.test_for_nans(x_out, "unet")
  File "/Users/z/git/stable-diffusion-webui/modules/devices.py", line 156, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.


xxaier commented May 24, 2023

I found that the combination of this plugin (Bing-su/adetailer#78) and sd-webui-controlnet caused this error for me.

PROuserR commented:

Having the same issue here. It is so frustrating. I posted the error on Reddit; if anyone could help, please see:
https://www.reddit.com/r/StableDiffusion/comments/13ta32j/nansexception_a_tensor_with_all_nans_was_produced/


fiskbil commented Jun 7, 2023

Had the same issue and it went away after downgrading Nvidia driver to 531.79. Possibly it's related to the recent memory management changes Nvidia did: #11063

Stan-Stani commented:

> Had the same issue and it went away after downgrading Nvidia driver to 531.79. Possibly it's related to the recent memory management changes Nvidia did: #11063

I tried downgrading to that version and I still have the issue. Every once in a while it'll start working, but inconsistently.

2blackbar commented:

Fix that works 100% of the time
#12292

catboxanon (Collaborator) commented:

Duplicate of #6923

@catboxanon catboxanon marked this as a duplicate of #6923 Aug 15, 2023