[NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller

xizaoqu/MOFT

Video Diffusion Models are Training-free Motion Interpreter and Controller

Zeqi Xiao, Yifan Zhou, Shuai Yang, Xingang Pan

Installation

Set up the environment with:

conda create -n moft python=3.8
conda activate moft
pip install -r requirements.txt

Download checkpoints from AnimateDiff, LoRA, and SD-1.5, and place them in the following structure:

models/
├── DreamBooth_LoRA
│   ├── realisticVisionV20_v20.safetensors
├── Motion_Module
│   ├── mm_sd_v15_v2.ckpt
├── stable-diffusion-v1-5
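
The layout above can be prepared ahead of time with a short shell snippet (a sketch; the checkpoint files themselves must still be downloaded from the AnimateDiff, LoRA, and SD-1.5 release pages and copied into the matching folders):

```shell
# Create the expected checkpoint directory layout.
# The actual weight files (realisticVisionV20_v20.safetensors,
# mm_sd_v15_v2.ckpt, and the SD-1.5 weights) are downloaded separately.
mkdir -p models/DreamBooth_LoRA
mkdir -p models/Motion_Module
mkdir -p models/stable-diffusion-v1-5
```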

Then run process.ipynb.

🔗 Citation

If you find our work helpful, please cite:

@article{xiao2024video,
  title={Video Diffusion Models are Training-free Motion Interpreter and Controller},
  author={Xiao, Zeqi and Zhou, Yifan and Yang, Shuai and Pan, Xingang},
  journal={arXiv preprint arXiv:2405.14864},
  year={2024}
}
