Please help #16

Closed
KillyTheNetTerminal opened this issue Dec 14, 2024 · 13 comments

@KillyTheNetTerminal:

[screenshot]
Traceback (most recent call last):
File "D:\CondaENV\Trelliz\lib\site-packages\transformers\utils\hub.py", line 385, in cached_file
resolved_file = hf_hub_download(
File "D:\CondaENV\Trelliz\lib\site-packages\huggingface_hub\utils_validators.py", line 106, in inner_fn
validate_repo_id(arg_value)
File "D:\CondaENV\Trelliz\lib\site-packages\huggingface_hub\utils_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars or '-', '
', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean'.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2037, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill_init
.py", line 20, in
from .magic_quill import MagicQuill
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 104, in
class MagicQuill(object):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 106, in MagicQuill
llavaModel = LLaVAModel()
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\llava_new.py", line 29, in init
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\LLaVA\llava\model\builder.py", line 121, in load_pretrained_model
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
File "D:\CondaENV\Trelliz\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 758, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "D:\CondaENV\Trelliz\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 590, in get_tokenizer_config
resolved_config_file = cached_file(
File "D:\CondaENV\Trelliz\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

Cannot import E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill module for custom nodes: Incorrect path_or_model_id: 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

@zliucz (Collaborator) commented Dec 15, 2024:

Hi. Please check if the llava-v1.5-7b-finetune-clean checkpoints have been correctly placed at your path E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean. Thanks.
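
For anyone hitting the same HFValidationError: it generally means transformers could not find a local folder at that path, so it fell back to treating the string as a Hugging Face Hub repo id, which a Windows path can never be. A minimal check from Python, using the path from the traceback above (adjust it to your own install):

import os

# Path reported in the traceback above; adjust to your ComfyUI install.
model_dir = r"E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean"

print("folder exists:", os.path.isdir(model_dir))
if os.path.isdir(model_dir):
    # A usable checkpoint folder should contain config.json plus tokenizer and weight files.
    print("config.json present:", os.path.isfile(os.path.join(model_dir, "config.json")))
    print("contents:", os.listdir(model_dir))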

@zliucz closed this as completed Dec 15, 2024
@KillyTheNetTerminal (Author):

Thank you very much for answering me. Where can I get that checkpoint? I'm a little lost, sorry.

@zliucz (Collaborator) commented Dec 16, 2024:

Hi, here are the checkpoints. Thanks.

@KillyTheNetTerminal (Author):

I don't know where I should put the files or how I should do it :(

@zliucz (Collaborator) commented Dec 16, 2024:

Just merge the downloaded folder into your ComfyUI /models folder. Thanks.
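
In other words, after merging, the layout should look roughly like this (folder and file names are taken from the error messages in this thread; the exact file list inside the checkpoint may differ):

ComfyUI/
  models/
    llava-v1.5-7b-finetune-clean/
      config.json
      (tokenizer and weight files)
  custom_nodes/
    ComfyUI_MagicQuill/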

@KillyTheNetTerminal (Author):

[screenshot]
This whole folder goes inside the models folder, right? Not inside the MagicQuill custom nodes folder?

@zliucz (Collaborator) commented Dec 16, 2024:

Yes. You are correct ;)

@KillyTheNetTerminal (Author):

[screenshot]
[screenshot]
Where should I put the other folders?

@zliucz (Collaborator) commented Dec 16, 2024:

The same way.

@KillyTheNetTerminal (Author):

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
['E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python312.zip', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', '../..', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy_extras']
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2037, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 995, in exec_module
File "", line 488, in call_with_frames_removed
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill_init
.py", line 20, in
from .magic_quill import MagicQuill
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 14, in
from .scribble_color_edit import ScribbleColorEditModel
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\scribble_color_edit.py", line 13, in
from comfyui_controlnet_aux.node_wrappers.lineart import LineArt_Preprocessor
ModuleNotFoundError: No module named 'comfyui_controlnet_aux'

Cannot import E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill module for custom nodes: No module named 'comfyui_controlnet_aux'

Import times for custom nodes:
0.0 seconds: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds (IMPORT FAILED): E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill
0.2 seconds: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
0.4 seconds: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_BrushNet

Okay, I disabled the other custom nodes to see it clearly; this is the log now.
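
For reference, the ModuleNotFoundError above just means comfyui_controlnet_aux is not importable: MagicQuill's scribble_color_edit.py does a direct "from comfyui_controlnet_aux.node_wrappers.lineart import LineArt_Preprocessor", so that custom node has to be installed and enabled alongside MagicQuill. A quick, hedged check from the embedded Python that the node folder is where ComfyUI expects it (path assumed from the log above):

import os

# Assumed custom_nodes location, copied from the sys.path dump in the log.
custom_nodes = r"E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes"
print("comfyui_controlnet_aux installed:", os.path.isdir(os.path.join(custom_nodes, "comfyui_controlnet_aux")))

If it is missing or disabled, installing or enabling the comfyui_controlnet_aux custom node through ComfyUI-Manager should resolve this particular import error.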

@KillyTheNetTerminal (Author):

[START] Security scan
[DONE] Security scan

ComfyUI-Manager: installing dependencies done.

** ComfyUI startup time: 2024-12-16 00:55:18.115834
** Platform: Windows
** Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
** Python executable: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python.exe
** ComfyUI Path: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI
** Log path: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\comfyui.log

Prestartup times for custom nodes:
2.3 seconds: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 4096 MB, total RAM 32666 MB
pytorch version: 2.5.1+cu124
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1650 : native
Using pytorch cross attention
[Prompt Server] web root: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\web

Loading: ComfyUI-Manager (V2.55.4)

ComfyUI Revision: 2890 [9a616b81] *DETACHED | Released on '2024-12-04'

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[comfyui_controlnet_aux] | INFO -> Using ckpts path: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
['E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python312.zip', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', '../..', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy_extras']
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2037, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 995, in exec_module
File "", line 488, in call_with_frames_removed
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill_init
.py", line 20, in
from .magic_quill import MagicQuill
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 104, in
class MagicQuill(object):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 106, in MagicQuill
llavaModel = LLaVAModel()
^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\llava_new.py", line 29, in init
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\LLaVA\llava\model\builder.py", line 122, in load_pretrained_model
model = LlavaLlamaForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 3584, in from_pretrained
raise ValueError(
ValueError: You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg when passing `quantization_config` argument at the same time.

Cannot import E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill module for custom nodes: You can't pass `load_in_4bit` or `load_in_8bit` as a kwarg when passing `quantization_config` argument at the same time.

Sorry, so silly of me. After activating the controlnet custom node, the errors are the same :(

@zliucz (Collaborator) commented Dec 16, 2024:

Check this issue #7.
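
For context, that ValueError comes from newer transformers versions rejecting calls that pass load_in_4bit or load_in_8bit together with a quantization_config. The usual shape of the fix (a hedged sketch of the transformers API, not the exact change to MagicQuill's builder.py) is to describe quantization only once, via BitsAndBytesConfig:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical illustration; the real loader in LLaVA/llava/model/builder.py uses LlavaLlamaForCausalLM.
model_path = r"E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean"

# Put the 4-bit settings in the config instead of also passing load_in_4bit=True as a separate kwarg.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    quantization_config=quant_config,
    device_map="auto",
)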

@KillyTheNetTerminal (Author):

[screenshot]
E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
['E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\python312.zip', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\glob', '../..', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_controlnet_aux', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes', 'E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\comfy_extras']
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\nodes.py", line 2037, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 995, in exec_module
File "", line 488, in call_with_frames_removed
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill_init
.py", line 20, in
from .magic_quill import MagicQuill
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 104, in
class MagicQuill(object):
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\magic_quill.py", line 106, in MagicQuill
llavaModel = LLaVAModel()
^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\llava_new.py", line 29, in init
self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill\LLaVA\llava\model\builder.py", line 121, in load_pretrained_model
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 773, in from_pretrained
config = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 1100, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\configuration_utils.py", line 634, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\configuration_utils.py", line 689, in _get_config_dict
resolved_config_file = cached_file(
^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\utils\hub.py", line 356, in cached_file
raise EnvironmentError(
OSError: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean does not appear to have a file named config.json. Checkout 'https://huggingface.co/E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean/None' for available files.

Cannot import E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_MagicQuill module for custom nodes: E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean does not appear to have a file named config.json. Checkout 'https://huggingface.co/E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean/None' for available files.

Is there a problem with how the custom node is being downloaded in the first place? Am I overlooking something?
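
A note on this last error: it is different from the first one. The folder is now being found, but transformers cannot see a config.json directly inside it, which usually points to an incomplete download or to the files sitting one directory level deeper than expected (for example models\llava-v1.5-7b-finetune-clean\llava-v1.5-7b-finetune-clean\config.json). A quick way to locate where config.json actually ended up (path taken from the error above):

import os

model_dir = r"E:\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\llava-v1.5-7b-finetune-clean"

print("top-level contents:", os.listdir(model_dir))
# Search the whole tree in case the checkpoint was extracted into a nested subfolder.
for root, dirs, files in os.walk(model_dir):
    if "config.json" in files:
        print("found config.json in:", root)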
