
# Video Diffusion Models are Training-free Motion Interpreter and Controller

Zeqi Xiao, Yifan Zhou, Shuai Yang, Xingang Pan

## Installation

Set up the environment by running:

```bash
conda create -n moft python=3.8
conda activate moft
pip install -r requirements.txt
```
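
Before downloading checkpoints, a quick sanity check like the one below can confirm the environment installed correctly. It assumes `requirements.txt` pulls in `torch` and `diffusers`, which is typical for AnimateDiff-based code but not confirmed by this README:

```python
# Hypothetical sanity check; assumes torch and diffusers were installed
# by requirements.txt (typical for AnimateDiff-based projects).
import torch
import diffusers

print(f"torch {torch.__version__}, diffusers {diffusers.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```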

Download checkpoints from AnimateDiff, LoRA, and SD-1.5, and put them into the following structure:

```
models/
├── DreamBooth_LoRA
│   ├── realisticVisionV20_v20.safetensors
├── Motion_Module
│   ├── mm_sd_v15_v2.ckpt
├── stable-diffusion-v1-5
```
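
Since a missing or misplaced checkpoint is a common source of load errors, a small path check can verify the layout above before opening the notebook. The file names below are copied from the directory tree; the script itself is a convenience sketch, not part of the repository:

```python
# Convenience sketch to verify the checkpoint layout described above;
# paths are taken directly from the directory tree in this README.
from pathlib import Path

expected = [
    Path("models/DreamBooth_LoRA/realisticVisionV20_v20.safetensors"),
    Path("models/Motion_Module/mm_sd_v15_v2.ckpt"),
    Path("models/stable-diffusion-v1-5"),  # full SD-1.5 model directory
]

for path in expected:
    status = "ok" if path.exists() else "MISSING"
    print(f"[{status}] {path}")
```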

Then run `process.ipynb` (e.g., open it with `jupyter notebook process.ipynb`).

## 🔗 Citation

If you find our work helpful, please cite:

```bibtex
@article{xiao2024video,
  title={Video Diffusion Models are Training-free Motion Interpreter and Controller},
  author={Xiao, Zeqi and Zhou, Yifan and Yang, Shuai and Pan, Xingang},
  journal={arXiv preprint arXiv:2405.14864},
  year={2024}
}
```