streamlit run scripts/demo/video_sampling.py --server.port 8861

The error is as follows:

/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin
Initialized embedder #0: FrozenOpenCLIPImagePredictionEmbedder with 683800065 params. Trainable: False
Initialized embedder #1: ConcatTimestepEmbedderND with 0 params. Trainable: False
Initialized embedder #2: ConcatTimestepEmbedderND with 0 params. Trainable: False
Initialized embedder #3: VideoPredictionEmbedderWithEncoder with 83653863 params. Trainable: False
Initialized embedder #4: ConcatTimestepEmbedderND with 0 params. Trainable: False
Loading model from checkpoints/svd_image_decoder.safetensors
imgimgimgimgimgimgimgimgimgimgimg None
2023-12-06 17:01:10.503 Uncaught app exception
Traceback (most recent call last):
  File "/root/anaconda3/envs/pt2/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/root/generative-models/scripts/demo/video_sampling.py", line 144, in <module>
    value_dict["cond_frames"] = img + cond_aug * torch.randn_like(img)
TypeError: randn_like(): argument 'input' (position 1) must be Tensor, not NoneType
The relevant code in 'video_sampling.py' is shown in the following figure:

After testing, I found that the 'img' object is None, which appears to be a bug. I hope the author can fix it. Thank you.
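A possible workaround until this is fixed is to guard the failing assignment so the app stops with a clear message when no input image has been loaded, instead of crashing. This is only a sketch, not the author's code: the wrapper function name is hypothetical, and it assumes img is the tensor returned by the demo's image-loading widget and is None when nothing has been uploaded.

```python
import streamlit as st
import torch

def add_noised_cond_frames(value_dict, img, cond_aug):
    """Guarded version of the assignment that fails at video_sampling.py line 144."""
    if img is None:
        # This is the case from the traceback: no image was loaded, so `img` is
        # None and torch.randn_like(None) raises a TypeError.
        st.warning("No input image loaded. Upload or select an image before sampling.")
        st.stop()  # end this Streamlit run cleanly instead of raising an exception
    # Add per-frame Gaussian noise scaled by cond_aug, as in the original line.
    value_dict["cond_frames"] = img + cond_aug * torch.randn_like(img)
    return value_dict
```

A proper fix would likely be for the demo to require an input image (or fall back to a default) before building value_dict.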