
Cannot handle this data type: (1, 1, 3), <f4 #3539

Open
Maveyyl opened this issue May 21, 2024 · 21 comments · Fixed by dnswd/ComfyUI#1 · May be fixed by #6300
Labels
Bug: Something is confirmed to not be working properly.

Comments


Maveyyl commented May 21, 2024

Hi,

After two days of not using it, I updated ComfyUI, and now I get this error whenever I try to sample anything; it seems to happen when it tries to show a preview:

!!! Exception during processing!!! Cannot handle this data type: (1, 1, 3), <f4
Traceback (most recent call last):
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3130, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 3), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 313, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 761, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 663, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 650, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 629, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 534, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\k_diffusion\sampling.py", line 585, in sample_dpmpp_2m
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\comfy\samplers.py", line 532, in
k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\latent_preview.py", line 94, in callback
preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\latent_preview.py", line 18, in decode_latent_to_preview_image
preview_image = self.decode_latent_to_preview(x0)
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\latent_preview.py", line 48, in decode_latent_to_preview
return Image.fromarray(latents_ubyte.numpy())
File "C:\Users\maveyyl\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3134, in fromarray
raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 3), <f4
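For background on why PIL raises here: Image.fromarray looks up the array's (band-shape, dtype string) pair in an internal type map, and '<f4' (little-endian float32) has no RGB entry, while uint8 does. A minimal NumPy-only sketch (no PIL needed) of the dtype strings involved:

```python
import numpy as np

# PIL's Image.fromarray maps (band-shape, dtype string) pairs to image modes.
# A float32 array reports the dtype string '<f4' (little-endian 4-byte float),
# which has no RGB mapping in PIL's type map, while uint8 ('|u1') maps to "RGB".
arr = np.zeros((64, 64, 3), dtype=np.float32)
print(arr.dtype.str)     # '<f4' -- the key PIL fails to find

# Converting to uint8 before handing the array to PIL avoids the TypeError.
arr_u8 = (arr * 255).clip(0, 255).astype(np.uint8)
print(arr_u8.dtype.str)  # '|u1' -- a type PIL handles as RGB
```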


Maveyyl commented May 21, 2024

After investigating the file latent_preview.py a little, in the function decode_latent_to_preview (which was modified a few days ago), the values are transformed into the range [0.0, 255.0], but the dtype stays torch.float32 instead of becoming torch.uint8.

For some reason the "to" method doesn't perform the type change unless you call it on its own, like this:

        latents_ubyte = (((latent_image + 1) / 2)
                            .clamp(0, 1)  # change scale from -1..1 to 0..1
                            .mul(0xFF)  # to 0..255
                            )
        latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
        latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=True)

Not sure whether this defeats the purpose of the non-blocking transfer, though. Hope it helps.


Maveyyl commented May 22, 2024

The OS fix doesn't work for my Windows 11 + AMD CPU + AMD GPU setup.


dnswd commented May 24, 2024

@Maveyyl Thanks for the latent_preview.py clue. I tried yours, but the result is blank. Then I tried using:

        latents_ubyte = (((latent_image + 1) / 2)
                            .clamp(0, 1)  # change scale from -1..1 to 0..1
                            .mul(0xFF)  # to 0..255
                            )
        latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
        latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))

and it's working perfectly. I'm not sure why; maybe this issue is AMD-specific, but I hope this helps others.

OS: Windows 10 x86_64
CPU: AMD Ryzen 7 5700X (16) @ 3.393GHz
GPU: AMD Radeon RX 6700 XT

dnswd added a commit to dnswd/ComfyUI that referenced this issue May 24, 2024
@NeedsMoar

I could almost guarantee that AMD devices don't support non-blocking anything on Windows (especially not with DirectML).
None of the OpenCL extensions required to do it are there. The only way you'd get something like it is with resizable BAR enabled, but since that's cache coherent, I don't think the device itself considers it non-blocking even if the CPU can, unless it needs to access it. Knowing the DirectML backend, setting it to true uses the flag anyway, but incorrectly: it doesn't wait until the transfer finishes when the CPU tries to access the data, as it should, which results in broken images.

@comfyanonymous (Owner)

If that's the case, the right fix is adding:

    if directml_enabled:
        return False

Here: https://github.com/comfyanonymous/ComfyUI/blob/master/comfy/model_management.py#L630
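For context, here is a minimal standalone sketch of how such a guard behaves. Note the real device_supports_non_blocking() in comfy/model_management.py takes just a device argument and consults module-level state; the explicit directml_enabled parameter below is an assumption for illustration only.

```python
# Illustrative sketch only: the real function reads a module-level
# directml_enabled flag rather than taking it as a parameter.
def device_supports_non_blocking(device, directml_enabled=False):
    if directml_enabled:
        return False  # DirectML mishandles non-blocking host transfers
    return True

# With DirectML active, callers fall back to a blocking .to() transfer,
# which avoids the broken (blank/garbled) preview images.
print(device_supports_non_blocking("privateuseone", directml_enabled=True))  # False
print(device_supports_non_blocking("cuda:0"))                                # True
```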

@traugdor

Can confirm that the `directml_enabled` fix suggested above is working on my Radeon RX 6600 XT.

@tisThivas

After trying all of the suggested solutions, the only thing that worked was to redownload latent_preview.py and replace it with an older version. In my case, the one from 11 Mar 2024 was enough. It might not be a solution, but it's a workaround for the moment.

@mcmonkey4eva mcmonkey4eva added the Bug Something is confirmed to not be working properly. label Jun 10, 2024
@djdoubt03

Having the same issue. I'm using Stability Matrix with ComfyUI. It used to work, but I removed it a few weeks ago and decided to try again. CPU: AMD Ryzen 5 5600; GPU: AMD Radeon RX 6700 XT.

ERRORS:

G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch_dynamo\external_utils.py:17: UserWarning: Set seed for privateuseone device does not take effect, please add API's _is_in_bad_fork and manual_seed_all to privateuseone device module.
return fn(*args, **kwargs)
Requested to load BaseModel
Loading 1 new model
loading in lowvram mode 64.0
G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py:655: UserWarning: The operator 'aten::count_nonzero.dim_IntList' is not currently supported on the DML backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at C:__w\1\s\pytorch-directml-plugin\torch_directml\csrc\dml\dml_cpu_fallback.cpp:17.)
if latent_image is not None and torch.count_nonzero(latent_image) > 0: #Don't shift the empty latent image.
0%| | 0/20 [00:08<?, ?it/s]
!!! Exception during processing!!! Cannot handle this data type: (1, 1, 3), <f4
Traceback (most recent call last):
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3130, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 3), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\nodes.py", line 1355, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\nodes.py", line 1325, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 794, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 696, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 683, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 662, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 567, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\k_diffusion\sampling.py", line 140, in sample_euler
callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\comfy\samplers.py", line 565, in
k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 91, in callback
preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 26, in decode_latent_to_preview_image
preview_image = self.decode_latent_to_preview(x0)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 45, in decode_latent_to_preview
return preview_to_image(latent_image)
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\latent_preview.py", line 19, in preview_to_image
return Image.fromarray(latents_ubyte.numpy())
File "G:\SD\StabilityMatrix\Data\Packages\ComfyUI\venv\lib\site-packages\PIL\Image.py", line 3134, in fromarray
raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 3), <f4

@traugdor

@djdoubt03 Try changing the preview_to_image method in your latent_preview.py file:

def preview_to_image(latent_image):
    latents_ubyte = (((latent_image + 1) / 2)
                     .clamp(0, 1)  # change scale from -1..1 to 0..1
                     .mul(0xFF))   # to 0..255
    latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
    latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))

    return Image.fromarray(latents_ubyte.numpy())
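For reference, the same -1..1 to 0..255 scaling that the torch chain above performs, expressed as a standalone NumPy sketch (the function name here is made up for illustration); the explicit astype cast mirrors the split-out .to(dtype=torch.uint8) call:

```python
import numpy as np

# Illustrative NumPy version of the preview scaling: -1..1 -> 0..255 uint8.
def scale_latent_to_ubyte(latent):
    scaled = (latent + 1.0) / 2.0           # shift -1..1 into 0..1
    scaled = np.clip(scaled, 0.0, 1.0)      # clamp out-of-range values
    return (scaled * 255).astype(np.uint8)  # explicit cast, like the split .to()

print(scale_latent_to_ubyte(np.array([-1.0, 0.0, 1.0])).tolist())  # [0, 127, 255]
```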

@gcfrantz2009

I initially got the "cannot handle this data type" error, and the fix above (updating the preview_to_image method in latent_preview.py) got me past that, but now I'm getting blank output.

Ryzen 7950X3D / Radeon 7900 XTX

@traugdor

Sounds like a NaN issue, or something else is going on. Can you share a screenshot? I get blank images from time to time, which is almost always a driver issue. You can try resetting your GPU with this Python script. You must run Python in an administrator window.

import subprocess

def restart_gpu_driver():
    # Define the command to get the Instance ID of the GPU device
    command = 'powershell "Get-PnpDevice | Where-Object { ($_.Class -eq \'Display\' ) -and ($_.Status -eq \'OK\')} | Select-Object -ExpandProperty InstanceId"'
    
    try:
        # Get Instance ID and strip any extra whitespace or newline characters
        instanceID = subprocess.check_output(command, shell=True, text=True).strip()
        print("Running on the selected GPU: " + instanceID)
        
        # Define the commands to disable and enable the GPU device
        disable_command = f'powershell "Disable-PnpDevice -InstanceId \'{instanceID}\' -Confirm:$false"'
        enable_command = f'powershell "Enable-PnpDevice -InstanceId \'{instanceID}\' -Confirm:$false"'
        
        # Disable the GPU device
        subprocess.run(disable_command, shell=True, check=True)
        print("GPU disabled successfully.")
        
        # Enable the GPU device
        subprocess.run(enable_command, shell=True, check=True)
        print("GPU enabled successfully.")
    except subprocess.CalledProcessError as e:
        print(f"An error occurred: {e}")

# Call the function to restart the GPU driver
restart_gpu_driver()

Your screen will flash. This script usually fixes any NaN issues I have with my 6600 XT.


ansaus commented Jun 19, 2024

The `directml_enabled` fix in model_management.py suggested above helped, in addition to the fix in latent_preview.py (which is probably not even needed).

@traugdor

Forcing uint8 was specifically for AMD devices.


voyager5874 commented Jul 10, 2024

It goes away if the preview is disabled via the Manager; the suggestions above didn't work for me (RX 570, 8 GB VRAM, 32 GB RAM, --directml).
https://www.reddit.com/r/StableDiffusion/comments/1cx2sqg/cmfyui_typeerror_cannot_handle_this_data_type_1_1/

@zyzz15620

@dnswd Thanks! I tried your fix and it's working; my GPU is an AMD RX 580.

    latents_ubyte = (((latent_image + 1) / 2)
                        .clamp(0, 1)  # change scale from -1..1 to 0..1
                        .mul(0xFF)  # to 0..255
                        )
    latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
    latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))


ZeusArts commented Oct 12, 2024

@dnswd
@zyzz15620
It works for me too; my Radeon RX 5500 XT KSampler previewer is working now and the problem is fixed. Thank you, guys.

@heirofsihgma

I'm having a quite similar issue, but @dnswd's latent_preview.py fix quoted above somehow did not work for me!

My system:

  • Windows 10 64-bit
  • AMD Radeon RX 7900 (GPU)
  • Over 40 GB of RAM
  • Using ComfyUI in a virtual environment (venv) with Python

When using a custom node (Efficiency Nodes) I get this error when trying a simple prompt:

===============================================================

ComfyUI Error Report

Error Details

  • Node Type: KSampler Adv. (Efficient)
  • Exception Type: TypeError
  • Exception Message: Cannot handle this data type: (1, 1, 3), <f4

Stack Trace

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 2225, in sample_adv
    return super().sample(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 732, in sample
    samples, images, gifs, preview = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler,
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 554, in process_latent_image
    samples = KSamplerAdvanced().sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\nodes.py", line 1471, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\nodes.py", line 1404, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "C:\Users\Sigma\miniconda3\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\k_diffusion\sampling.py", line 175, in sample_euler_ancestral
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 598, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 99, in callback
    preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 26, in decode_latent_to_preview_image
    preview_image = self.decode_latent_to_preview(x0)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 53, in decode_latent_to_preview
    return preview_to_image(latent_image)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 19, in preview_to_image
    return Image.fromarray(latents_ubyte.numpy())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sigma\miniconda3\Lib\site-packages\PIL\Image.py", line 3134, in fromarray
    raise TypeError(msg) from e

System Information

  • ComfyUI Version: v0.2.3-6-g191a0d5
  • Arguments: main.py --directml
  • OS: nt
  • Python Version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:03:56) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cpu

Devices

  • Name: privateuseone
    • Type: privateuseone
    • VRAM Total: 1073741824
    • VRAM Free: 1073741824
    • Torch VRAM Total: 1073741824
    • Torch VRAM Free: 1073741824

Logs

2024-10-13 15:40:50,512 - root - INFO - Using directml with device: 
2024-10-13 15:40:50,522 - root - INFO - Total VRAM 1024 MB, total RAM 49027 MB
2024-10-13 15:40:50,523 - root - INFO - pytorch version: 2.3.1+cpu
2024-10-13 15:40:50,524 - root - INFO - Set vram state to: NORMAL_VRAM
2024-10-13 15:40:50,524 - root - INFO - Device: privateuseone
2024-10-13 15:40:52,021 - root - INFO - Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-10-13 15:40:54,036 - root - INFO - [Prompt Server] web root: D:\Private\ComfyUI Virtual Environment\ComfyUI\web
2024-10-13 15:40:55,246 - root - INFO - 
Import times for custom nodes:
2024-10-13 15:40:55,246 - root - INFO -    0.0 seconds: D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\websocket_image_save.py
2024-10-13 15:40:55,248 - root - INFO -    0.0 seconds: D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui
2024-10-13 15:40:55,248 - root - INFO -    0.2 seconds: D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\ComfyUI-Manager
2024-10-13 15:40:55,249 - root - INFO - 
2024-10-13 15:40:55,259 - root - INFO - Starting server

2024-10-13 15:40:55,260 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-10-13 15:41:14,768 - root - INFO - got prompt
2024-10-13 15:41:15,114 - root - INFO - model weight dtype torch.float32, manual cast: None
2024-10-13 15:41:15,118 - root - INFO - model_type EPS
2024-10-13 15:41:21,214 - root - INFO - Using split attention in VAE
2024-10-13 15:41:21,216 - root - INFO - Using split attention in VAE
2024-10-13 15:41:21,936 - root - INFO - Requested to load SDXLClipModel
2024-10-13 15:41:21,937 - root - INFO - Loading 1 new model
2024-10-13 15:41:21,959 - root - INFO - loaded completely 0.0 1560.802734375 True
2024-10-13 15:41:23,639 - root - INFO - Requested to load SDXLClipModel
2024-10-13 15:41:23,640 - root - INFO - Loading 1 new model
2024-10-13 15:41:28,314 - root - INFO - Requested to load SDXL
2024-10-13 15:41:28,315 - root - INFO - Loading 1 new model
2024-10-13 15:41:32,510 - root - INFO - loaded completely 0.0 9794.096694946289 True
2024-10-13 15:41:32,994 - root - ERROR - !!! Exception during processing !!! Cannot handle this data type: (1, 1, 3), <f4
2024-10-13 15:41:33,000 - root - ERROR - Traceback (most recent call last):
  File "C:\Users\Sigma\miniconda3\Lib\site-packages\PIL\Image.py", line 3130, in fromarray
    mode, rawmode = _fromarray_typemap[typekey]
                    ~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 3), '<f4')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 2225, in sample_adv
    return super().sample(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 732, in sample
    samples, images, gifs, preview = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler,
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 554, in process_latent_image
    samples = KSamplerAdvanced().sample(model, add_noise, seed, steps, cfg, sampler_name, scheduler,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\nodes.py", line 1471, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\nodes.py", line 1404, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sigma\miniconda3\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\k_diffusion\sampling.py", line 175, in sample_euler_ancestral
    callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\comfy\samplers.py", line 598, in <lambda>
    k_callback = lambda x: callback(x["i"], x["denoised"], x["x"], total_steps)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 99, in callback
    preview_bytes = previewer.decode_latent_to_preview_image(preview_format, x0)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 26, in decode_latent_to_preview_image
    preview_image = self.decode_latent_to_preview(x0)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 53, in decode_latent_to_preview
    return preview_to_image(latent_image)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Private\ComfyUI Virtual Environment\ComfyUI\latent_preview.py", line 19, in preview_to_image
    return Image.fromarray(latents_ubyte.numpy())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Sigma\miniconda3\Lib\site-packages\PIL\Image.py", line 3134, in fromarray
    raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 3), <f4

2024-10-13 15:41:33,006 - root - INFO - Prompt executed in 18.23 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":41,"last_link_id":75,"nodes":[{"id":20,"type":"Note","pos":{"0":540,"1":-1013},"size":{"0":646.016357421875,"1":370.7940368652344},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["= EMBEDDINGS =\n\n[Pony]\n\n.POSITIVES\n\nzPDXLrl, zPDXL2\n\n.NEGATIVES\n\nzPDXLrl-neg, zPDXL2-neg, negative_hand, negative_hand-neg\n\n= EMBEDDINGS =\n\n[SDXL 1.0]\n\n.POSITIVES\n\nZipRealism, Zip2D\n\n.NEGATIVES\n\nDeepNegative_xl_v1, worst quality, low quality, logo, text, \nwatermark, username:1, ac_neg1, ac_neg2, ZipRealism_Neg"],"color":"#432","bgcolor":"#653"},{"id":29,"type":"CLIPSetLastLayer","pos":{"0":-1519,"1":161},"size":{"0":315,"1":58},"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":44}],"outputs":[{"name":"CLIP","type":"CLIP","links":[47,52],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPSetLastLayer"},"widgets_values":[-2]},{"id":13,"type":"CLIPSetLastLayer","pos":{"0":-1610,"1":1740},"size":{"0":315,"1":58},"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":26}],"outputs":[{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for S&R":"CLIPSetLastLayer"},"widgets_values":[-1]},{"id":23,"type":"LoraLoader","pos":{"0":-448,"1":1575},"size":{"0":490.43994140625,"1":126},"flags":{},"order":16,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":31},{"name":"clip","type":"CLIP","link":32}],"outputs":[{"name":"MODEL","type":"MODEL","links":[35],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for 
S&R":"LoraLoader"},"widgets_values":["Styles_For_Pony_Twilight.safetensors",0.8,1]},{"id":24,"type":"LoraLoader","pos":{"0":-448,"1":1792},"size":{"0":481.14862060546875,"1":126},"flags":{},"order":18,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":35},{"name":"clip","type":"CLIP","link":34}],"outputs":[{"name":"MODEL","type":"MODEL","links":[38],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["ThePit_Style_Pony.safetensors",0.3,1]},{"id":26,"type":"LoraLoader","pos":{"0":-441,"1":2239},"size":{"0":464.44384765625,"1":133.8055419921875},"flags":{},"order":21,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":41},{"name":"clip","type":"CLIP","link":40}],"outputs":[{"name":"MODEL","type":"MODEL","links":[63],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["Nipple_Rings_Pony_XL.safetensors",1,1]},{"id":25,"type":"LoraLoader","pos":{"0":-442,"1":2025},"size":{"0":477.1378173828125,"1":126},"flags":{},"order":20,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":38},{"name":"clip","type":"CLIP","link":37}],"outputs":[{"name":"MODEL","type":"MODEL","links":[41],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["Expressive_H-000001.safetensors",0.25,1]},{"id":14,"type":"CLIPSetLastLayer","pos":{"0":-1610,"1":1580},"size":{"0":315,"1":58},"flags":{},"order":5,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":25}],"outputs":[{"name":"CLIP","type":"CLIP","links":[21,28,37,40,62],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"CLIPSetLastLayer"},"widgets_values":[-2]},{"id":5,"type":"EmptyLatentImage","pos":{"0":338,"1":1816},"size":{"0":315,"1":106},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,1]},{"id":33,"type":"LoraLoader","pos":{"0":-441,"1":2442},"size":{"0":451.3920593261719,"1":157.18954467773438},"flags":{},"order":22,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":63},{"name":"clip","type":"CLIP","link":62}],"outputs":[{"name":"MODEL","type":"MODEL","links":[68],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["MeridaBraveXLP_Character-10.safetensors",1,1]},{"id":22,"type":"LoraLoader","pos":{"0":-455,"1":1346},"size":{"0":511.34735107421875,"1":126},"flags":{},"order":13,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":29},{"name":"clip","type":"CLIP","link":28}],"outputs":[{"name":"MODEL","type":"MODEL","links":[31],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for S&R":"LoraLoader"},"widgets_values":["detail-add-xl.safetensors",2.5,1]},{"id":19,"type":"LoraLoader","pos":{"0":-451,"1":1090},"size":{"0":524.8563842773438,"1":163.40432739257812},"flags":{},"order":9,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":27},{"name":"clip","type":"CLIP","link":21}],"outputs":[{"name":"MODEL","type":"MODEL","links":[29],"slot_index":0,"shape":3},{"name":"CLIP","type":"CLIP","links":null,"shape":3}],"properties":{"Node name for 
S&R":"LoraLoader"},"widgets_values":["EnvyPonyPrettyEyes01.safetensors",1,1.32]},{"id":30,"type":"CLIPTextEncode","pos":{"0":-450,"1":110},"size":{"0":671.37548828125,"1":337.3836975097656},"flags":{},"order":11,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":52}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[53],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["score_9, score_8_up, score_7_up, The woman's face should be gorgeous, and the skin should be ultra detailed. She should have a deep cleavage, (((massive sagging breasts))), (((dark areola))), partially exposed and cutout from the clothing. The image should be in a vertical layout and the camera settings should be set to 8k for ultra high resolution. hyper-realistic, highest quality, masterpiece, huge thick nipple rings, perfecteyes, expressiveh, GothelXLP, big hair, curly hair, blue eyes, crown, makeup, (nr, nipple rings), sex slave, naked, standing up, full body shot, masterpiece, ((high detailed eye iris)), perfect eye iris, ((wide hips)), (((huge pregnant belly))), (((pregnant))), (((overdue pregnancy))), (((thick golden navel ring piercing))), sexy gaze, sexy expression, sensual gaze, MeridaXLP, freckles, wavy hair, ginger hair, ginger, zPDXL2"]},{"id":7,"type":"CLIPTextEncode","pos":{"0":-460,"1":691},"size":{"0":651.00537109375,"1":240.86366271972656},"flags":{},"order":10,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":47}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["score_6, score_5, score_4, pony, gaping, censored, furry, child, kid, chibi, 3d, photo, monochrome, elven ears, anime, multiple cocks, extra legs, extra hands, mutated legs, mutated hands, closed eyes, bad nipples, weird nipples, double nipple, covered breasts, covered nipples, low detailed iris, weird eyes, weird expression, surprised expression, startled 
face, skin artifacts, weird skin, cracked skin, segmented skin, fragmented skin, scaly skin, ((double navel)), extra leg, three legs, zPDXL2-neg, negative_hand, negative_hand-neg, ng_deepnegative_v1_75t, pony_negativeV2, boring_sdxl_v1, easynegative"]},{"id":3,"type":"KSampler","pos":{"0":1010,"1":1820},"size":{"0":315,"1":262},"flags":{},"order":23,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":68},{"name":"positive","type":"CONDITIONING","link":53},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","type":"LATENT","links":[],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[302727281332961,"randomize",30,7,"dpmpp_2m","karras",1]},{"id":4,"type":"CheckpointLoaderSimple","pos":{"0":-2638,"1":770},"size":{"0":441.39984130859375,"1":164.43212890625},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[27],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[25,26,32,34,44],"slot_index":1},{"name":"VAE","type":"VAE","links":[],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["ponyDiffusionV6XL_v6StartWithThisOne.safetensors"]},{"id":39,"type":"Efficient Loader","pos":{"0":809,"1":-456},"size":{"0":615.7736206054688,"1":581.9341430664062},"flags":{},"order":3,"mode":0,"inputs":[{"name":"lora_stack","type":"LORA_STACK","link":null,"shape":7},{"name":"cnet_stack","type":"CONTROL_NET_STACK","link":null,"shape":7}],"outputs":[{"name":"MODEL","type":"MODEL","links":[73],"slot_index":0},{"name":"CONDITIONING+","type":"CONDITIONING","links":[72],"slot_index":1},{"name":"CONDITIONING-","type":"CONDITIONING","links":[71],"slot_index":2},{"name":"LATENT","type":"LATENT","links":[70],"slot_index":3},{"name":"VAE","type":"VAE","links":[69],"slot_index":4},{"name":"CLIP","type":"CLIP","links":null},{"name":"DEPENDENCIES","type":"DEPENDENCIES","links":null}],"properties":{"Node 
name for S&R":"Efficient Loader"},"widgets_values":["ponyDiffusionV6XL_v6StartWithThisOne.safetensors","Baked VAE",-1,"None",1,1,"rainbow dash","bad prompts, low quality, bad, horrible art, awful art, amateur art","none","comfy",512,512,1],"color":"#222233","bgcolor":"#333355","shape":"box"},{"id":11,"type":"ImageUpscaleWithModel","pos":{"0":2740,"1":160},"size":{"0":241.79998779296875,"1":46},"flags":{},"order":15,"mode":0,"inputs":[{"name":"upscale_model","type":"UPSCALE_MODEL","link":10},{"name":"image","type":"IMAGE","link":11}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[23],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"ImageUpscaleWithModel"},"widgets_values":[]},{"id":21,"type":"ImageScaleBy","pos":{"0":3140,"1":190},"size":{"0":315,"1":82},"flags":{"collapsed":false},"order":17,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":23}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[24],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"ImageScaleBy"},"widgets_values":["nearest-exact",2]},{"id":12,"type":"SaveImage","pos":{"0":4140,"1":190},"size":{"0":315,"1":270},"flags":{},"order":19,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":24}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":9,"type":"SaveImage","pos":{"0":3190,"1":-340},"size":{"0":210,"1":270},"flags":{},"order":14,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":8,"type":"VAEDecode","pos":{"0":2370,"1":-50},"size":{"0":210,"1":46},"flags":{},"order":12,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":74},{"name":"vae","type":"VAE","link":75}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9,11],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":40,"type":"KSampler Adv. 
(Efficient)","pos":{"0":1596,"1":-411},"size":{"0":330,"1":422},"flags":{},"order":8,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":73},{"name":"positive","type":"CONDITIONING","link":72},{"name":"negative","type":"CONDITIONING","link":71},{"name":"latent_image","type":"LATENT","link":70},{"name":"optional_vae","type":"VAE","link":69,"shape":7},{"name":"script","type":"SCRIPT","link":null,"shape":7}],"outputs":[{"name":"MODEL","type":"MODEL","links":null},{"name":"CONDITIONING+","type":"CONDITIONING","links":null},{"name":"CONDITIONING-","type":"CONDITIONING","links":null},{"name":"LATENT","type":"LATENT","links":[74],"slot_index":3},{"name":"VAE","type":"VAE","links":[75],"slot_index":4},{"name":"IMAGE","type":"IMAGE","links":null}],"properties":{"Node name for S&R":"KSampler Adv. (Efficient)"},"widgets_values":["enable",234726052613130,"randomize",20,7,"euler_ancestral","normal",0,10000,"disable","auto","true"],"color":"#332222","bgcolor":"#553333","shape":"box"},{"id":10,"type":"UpscaleModelLoader","pos":{"0":2185,"1":201},"size":{"0":315,"1":58},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"UPSCALE_MODEL","type":"UPSCALE_MODEL","links":[10],"slot_index":0,"shape":3}],"properties":{"Node name for 
S&R":"UpscaleModelLoader"},"widgets_values":["4xUltrasharp_4xUltrasharpV10.pth"]}],"links":[[2,5,0,3,3,"LATENT"],[6,7,0,3,2,"CONDITIONING"],[9,8,0,9,0,"IMAGE"],[10,10,0,11,0,"UPSCALE_MODEL"],[11,8,0,11,1,"IMAGE"],[21,14,0,19,1,"CLIP"],[23,11,0,21,0,"IMAGE"],[24,21,0,12,0,"IMAGE"],[25,4,1,14,0,"CLIP"],[26,4,1,13,0,"CLIP"],[27,4,0,19,0,"MODEL"],[28,14,0,22,1,"CLIP"],[29,19,0,22,0,"MODEL"],[31,22,0,23,0,"MODEL"],[32,4,1,23,1,"CLIP"],[34,4,1,24,1,"CLIP"],[35,23,0,24,0,"MODEL"],[37,14,0,25,1,"CLIP"],[38,24,0,25,0,"MODEL"],[40,14,0,26,1,"CLIP"],[41,25,0,26,0,"MODEL"],[44,4,1,29,0,"CLIP"],[47,29,0,7,0,"CLIP"],[52,29,0,30,0,"CLIP"],[53,30,0,3,1,"CONDITIONING"],[62,14,0,33,1,"CLIP"],[63,26,0,33,0,"MODEL"],[68,33,0,3,0,"MODEL"],[69,39,4,40,4,"VAE"],[70,39,3,40,3,"LATENT"],[71,39,2,40,2,"CONDITIONING"],[72,39,1,40,1,"CONDITIONING"],[73,39,0,40,0,"MODEL"],[74,40,3,8,0,"LATENT"],[75,40,4,8,1,"VAE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.8264462809917361,"offset":[-428.83336378983165,589.5737936854584]}},"version":0.4}

Additional Context

(Please add any additional context or steps to reproduce the error here)

==================================================================================

Any ideas of what I can do to fix this?

@traugdor
Copy link

traugdor commented Nov 1, 2024

It's the same issue as before.
Your error is coming from here:

def preview_to_image(latent_image):

Change the preview_to_image method in the latent_preview.py file to this:

def preview_to_image(latent_image):
    latents_ubyte = (((latent_image + 1) / 2)
                     .clamp(0, 1)  # change scale from -1..1 to 0..1
                     .mul(0xFF)    # to 0..255
                     )
    latents_ubyte = latents_ubyte.to(dtype=torch.uint8)
    latents_ubyte = latents_ubyte.to(device="cpu", dtype=torch.uint8, non_blocking=comfy.model_management.device_supports_non_blocking(latent_image.device))

    return Image.fromarray(latents_ubyte.numpy())

For some reason our AMD gpus don't like the way it's coded. Converting it to an 8-bit unsigned int before dumping it on the CPU (if necessary) fixes it. Make sure you have the latest version of ComfyUI before editing the code.
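The underlying Pillow behavior can be reproduced outside ComfyUI: Image.fromarray has no mode mapping for a 3-channel float32 array (that is the (1, 1, 3), <f4 typekey in the error message), while the same array cast to uint8 maps to RGB. A minimal sketch, independent of ComfyUI (array contents are dummy data):

```python
import numpy as np
from PIL import Image

# 3-channel float32 array: produces the same typekey ((1, 1, 3), '<f4') as the error
arr = np.zeros((64, 64, 3), dtype=np.float32)

try:
    Image.fromarray(arr)
except TypeError as e:
    print(e)  # Cannot handle this data type: (1, 1, 3), <f4

# Casting to uint8 first gives Pillow a known mode (RGB)
img = Image.fromarray((arr * 255).astype(np.uint8))
print(img.mode)  # RGB
```

This is why forcing the tensor to torch.uint8 before the .numpy() call makes the preview work: the dtype, not the values, is what Pillow rejects.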

@tralalala2
Copy link

tralalala2 commented Nov 6, 2024

(quoting traugdor's fix for preview_to_image in latent_preview.py, above)

This one finally worked for me too!
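For reference, the arithmetic the patched function performs (shift [-1, 1] to [0, 1], scale to [0, 255], cast to uint8) can be sketched with NumPy alone; the array here is random stand-in data, not a real decoded latent:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
latent = rng.uniform(-1.5, 1.5, size=(64, 64, 3)).astype(np.float32)  # stand-in data

# Same normalization as the patched preview_to_image
ubyte = np.clip((latent + 1.0) / 2.0, 0.0, 1.0) * 255.0
ubyte = ubyte.astype(np.uint8)  # uint8 is what Image.fromarray expects for RGB

img = Image.fromarray(ubyte)
print(img.mode, img.size)  # RGB (64, 64)
```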

@larinius
Copy link

I had a perfectly working old install of ComfyUI, but naively pressed Update All.
Now the KSampler no longer works and fails with:
KeyError: ((1, 1, 3), '<f4')
Six months have passed since the first post, nothing has been fixed, and users still have to patch the code by hand. Pathetic.

@Nico-Hard
Copy link

(quoting larinius's comment above)

Totally agree...
This is even worse for me, as it led to a BSOD!
Thanks to @traugdor for the good code :)
