Teng Hu, Jiangning Zhang, Ran Yi, Yating Wang, Hongrui Huang, Jieyu Weng, Yabiao Wang, and Lizhuang Ma
- **Release the one-shot camera-motion transfer code**
- **Expected to release the few-shot camera-motion transfer code before 2024.10.20**
Result of motion transfer without a mask, from a sudden zoom-in video:
Result of motion transfer with a zoom-in video and its mask:
Ubuntu
python 3.9
cuda==11.8
gcc==7.5.0
cd AnimateDiff
pip install -r requirements.txt
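After installation, the environment can be sanity-checked with the short sketch below. It only assumes that `requirements.txt` installs PyTorch with CUDA support; adjust as needed.

```python
# Minimal environment sanity check (assumes requirements.txt installed PyTorch).
import sys
import torch

print("Python:", sys.version.split()[0])           # expected: 3.9.x
print("PyTorch:", torch.__version__)
print("CUDA build:", torch.version.cuda)           # expected: 11.8
print("CUDA available:", torch.cuda.is_available())
```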
We use AnimateDiff v2 in our model. Feel free to try other versions of AnimateDiff. Moreover, our model also works with other video generation models that contain a temporal attention module (e.g., Stable Video Diffusion and DynamiCrafter).
- Download the checkpoint for AnimateDiff v2, `mm_sd_v15_v2.ckpt` (Google Drive / HuggingFace / CivitAI), and put it in `models/Motion_Module/`.
- Download Realistic Vision V2.0 and put it in `models/DreamBooth_LoRA/`.
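As a quick check that the checkpoints ended up in the right place, something like the following can be used. The Realistic Vision filename below is a placeholder; use whatever filename your download has.

```python
# Verify checkpoint placement (the DreamBooth filename below is a placeholder).
from pathlib import Path

motion_module = Path("models/Motion_Module/mm_sd_v15_v2.ckpt")
dreambooth = Path("models/DreamBooth_LoRA/realisticVisionV20.safetensors")  # placeholder name

for ckpt in (motion_module, dreambooth):
    status = "found" if ckpt.exists() else "MISSING"
    print(f"{status}: {ckpt}")
```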
Prepare your reference video. Edit `configs/prompts/v2/v2-1-RealisticVision.yaml` to make sure `video_name` is the file path to your reference video. If a mask is needed, set `use_mask=True` and make sure `mask_save_dir` is the file path to your mask.
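If you prefer to set these fields from a script rather than by hand, a minimal PyYAML sketch is shown below. It assumes `video_name`, `use_mask`, and `mask_save_dir` are top-level keys in the config (open the file to confirm the actual nesting), and all paths are placeholders.

```python
# Sketch only: set the reference-video fields in the config with PyYAML.
# Assumes the keys are top-level; adjust if they are nested in the actual file.
# Note that safe_dump drops any comments present in the original config.
import yaml

config_path = "configs/prompts/v2/v2-1-RealisticVision.yaml"
with open(config_path) as f:
    cfg = yaml.safe_load(f)

cfg["video_name"] = "path/to/reference_video.mp4"   # placeholder path
cfg["use_mask"] = True                               # only if a mask is wanted
cfg["mask_save_dir"] = "path/to/mask"                # placeholder path

with open(config_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```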
MotionMaster can transfer the motion from a given reference video by substituting the temporal attention map (see the conceptual sketch at the end of this subsection):
python scripts/motionconvert.py --config configs/prompts/v2/v2-0-RealisticVision.yaml
Edit `video_name` in `configs/prompts/v2/v2-0-RealisticVision.yaml`. The generated samples can be found in the `samples/` folder.
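The substitution itself can be pictured with the toy sketch below. It is a conceptual illustration only, not the code in `scripts/motionconvert.py`: it assumes temporal self-attention tensors of shape (batch x spatial positions, frames, dim) and simply reuses the attention map computed from the reference video's features to aggregate the values of the video being generated.

```python
# Conceptual sketch of temporal attention map substitution (not the repo's code).
# q_ref, k_ref come from the reference video; v_gen from the video being generated.
# Shapes are assumed to be (batch * spatial_positions, num_frames, head_dim).
import torch

def substitute_temporal_attention(q_ref, k_ref, v_gen):
    scale = q_ref.shape[-1] ** -0.5
    # Attention map of the reference video: how its frames relate over time,
    # i.e., the motion pattern to transfer.
    attn_ref = torch.softmax(q_ref @ k_ref.transpose(-1, -2) * scale, dim=-1)
    # Apply the reference motion pattern to the generated video's values.
    return attn_ref @ v_gen
```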
For one-shot camera motion disentanglement, prepare a reference video and the corresponding mask (we suggest using SAM; see the sketch below) by editing `video_name` and `mask_save_dir` in `configs/prompts/v2/v2-1-RealisticVision.yaml`. Then run:
python scripts/motionconvert.py --config configs/prompts/v2/v2-1-RealisticVision.yaml
The generated samples can be found in the `samples/` folder.
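One possible way to obtain the mask with SAM (Segment Anything) is sketched below. The checkpoint path, input frame, prompt point, and output filename are all placeholders, and the exact mask format expected under `mask_save_dir` should be checked against the config; this is only a sketch of the suggestion above.

```python
# Sketch: segment the moving object in one reference frame with SAM.
# Requires `pip install segment-anything`; checkpoint path and prompt point are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder path
predictor = SamPredictor(sam)

frame = cv2.cvtColor(cv2.imread("frame_0000.png"), cv2.COLOR_BGR2RGB)  # placeholder frame
predictor.set_image(frame)

# A single foreground click on the moving object (placeholder coordinates).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=False,
)
cv2.imwrite("mask_0000.png", (masks[0] * 255).astype(np.uint8))  # placeholder output name
```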
Coming Soon.
If you find this code helpful for your research, please cite:
@misc{hu2024motionmaster,
title={MotionMaster: Training-free Camera Motion Transfer For Video Generation},
author={Teng Hu and Jiangning Zhang and Ran Yi and Yating Wang and Hongrui Huang and Jieyu Weng and Yabiao Wang and Lizhuang Ma},
year={2024},
eprint={2404.15789},
archivePrefix={arXiv},
primaryClass={cs.CV}
}