(magicpose) root@autodl-container-98884cbd0f-2e382a5b:~/autodl-tmp# bash scripts/inference_any_image_pose.sh
TORCH_VERSION=1 FP16_DTYPE=torch.float16
Customize text prompt: None
Namespace(model_config='model_lib/ControlNet/models/cldm_v15_reference_only_pose.yaml', reinit_hint_block=False, image_size=64, empty_text_prob=0.1, sd_locked=True, only_mid_control=False, finetune_all=False, finetune_imagecond_unet=False, control_type=['body+hand+face'], control_dropout=0.0, depth_bg_threshold=0.0, inpaint_unet=False, blank_mask_prob=0.0, mask_densepose=0.0, control_mode='controlnet_important', wonoise=True, mask_bg=False, img_bin_limit='all', num_workers=1, train_batch_size=1, val_batch_size=1, lr=1e-05, lr_sd=1e-05, weight_decay=0, lr_anneal_steps=0, ema_rate=0, num_train_steps=1, grad_clip_norm=0.5, gradient_accumulation_steps=1, seed=42, logging_steps=100, logging_gen_steps=1000, save_steps=10000, save_total_limit=100, use_fp16=True, global_step=0, load_optimizer_state=True, compile=False, with_text=False, pose_transfer=False, eta=0.0, autoreg=False, gif_time=0.03, text_prompt=None, v4=True, train_dataset='tiktok_video_arnold', output_dir=None, local_log_dir='./tiktok_test_log/tb_log/181020/001/log', local_image_dir='./tiktok_test_log/image_log/181020/001/image', resume_dir=None, image_pretrain_dir='./pretrained_weights/model_state-110000.th', pose_pretrain_dir=None, init_path='/home/dchang/MagicDance/jefu/code/model_lib/ControlNet/pretrained_weights/control_sd15_ini.ckpt', local_cond_image_path='./example_data/image/out-of-domain/181020.png', local_pose_path='./example_data/pose_sequence/001', world_size=1, local_rank=0, rank=0, device=device(type='cuda', index=0))
ControlLDMReferenceOnlyPose: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Traceback (most recent call last):
  File "/root/autodl-tmp/test_any_image_pose.py", line 577, in <module>
    main(args)
  File "/root/autodl-tmp/test_any_image_pose.py", line 354, in main
    model = create_model(args.model_config).cpu()
  File "/root/autodl-tmp/model_lib/ControlNet/cldm/model.py", line 26, in create_model
    model = instantiate_from_config(config.model).cpu()
  File "/root/autodl-tmp/model_lib/ControlNet/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/autodl-tmp/model_lib/ControlNet/cldm/cldm.py", line 1090, in __init__
    super().__init__(*args, **kwargs)
  File "/root/autodl-tmp/model_lib/ControlNet/ldm/models/diffusion/ddpm.py", line 1845, in __init__
    self.instantiate_cond_stage(cond_stage_config)
  File "/root/autodl-tmp/model_lib/ControlNet/ldm/models/diffusion/ddpm.py", line 1912, in instantiate_cond_stage
    model = instantiate_from_config(config)
  File "/root/autodl-tmp/model_lib/ControlNet/ldm/util.py", line 79, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/autodl-tmp/model_lib/ControlNet/ldm/modules/encoders/modules.py", line 99, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/root/miniconda3/envs/magicpose/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1759, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
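This OSError is raised before any model weights are touched: `CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")` cannot fetch the tokenizer files from huggingface.co, which is common on AutoDL containers without direct Hub access. A usual workaround is to download the tokenizer once (on a machine with access, or with `HF_ENDPOINT` pointed at a mirror) and then load it from a local directory. A minimal sketch, assuming an illustrative local path that is not part of the MagicPose repo:

```python
import os

HUB_ID = "openai/clip-vit-large-patch14"
# Illustrative destination, not a path the repo defines:
LOCAL_DIR = "./pretrained_weights/clip-vit-large-patch14"

def resolve_tokenizer_source(local_dir: str, hub_id: str) -> str:
    """Prefer an on-disk tokenizer directory when its core files are present,
    otherwise fall back to the Hub model id."""
    needed = ("vocab.json", "merges.txt")  # core CLIP BPE tokenizer files
    if all(os.path.isfile(os.path.join(local_dir, f)) for f in needed):
        return local_dir
    return hub_id

# One-time step on a machine that can reach the Hub (or with a mirror
# exported first, e.g. `export HF_ENDPOINT=https://hf-mirror.com`):
#   from transformers import CLIPTokenizer
#   CLIPTokenizer.from_pretrained(HUB_ID).save_pretrained(LOCAL_DIR)
#
# Afterwards, offline runs resolve to the on-disk copy instead of the Hub:
#   CLIPTokenizer.from_pretrained(resolve_tokenizer_source(LOCAL_DIR, HUB_ID))
```

The same idea applies to `modules.py` line 99 directly: replacing the hard-coded `version` string with a local directory containing the saved tokenizer files avoids any network call at model construction time.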
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3233) of binary: /root/miniconda3/envs/magicpose/bin/python
Traceback (most recent call last):
  File "/root/miniconda3/envs/magicpose/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.13.1', 'console_scripts', 'torchrun')())
  File "/root/miniconda3/envs/magicpose/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/root/miniconda3/envs/magicpose/lib/python3.9/site-packages/torch/distributed/run.py", line 762, in main
    run(args)
  File "/root/miniconda3/envs/magicpose/lib/python3.9/site-packages/torch/distributed/run.py", line 753, in run
    elastic_launch(
  File "/root/miniconda3/envs/magicpose/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/miniconda3/envs/magicpose/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
test_any_image_pose.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-11-29_10:14:57
  host      : autodl-container-98884cbd0f-2e382a5b
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 3233)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html