The official repo for "MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing"
[07/12/2024] All data of the other subjects have been released here. Thanks to ZiXuan for providing the cloud storage.
[07/8/2024] The data and pretrained models of Subject 306 have been released here!
[01/8/2024] The code has been released!
[06/5/2024] Added more results to the project page.
[28/4/2024] The official repo is initialized.
- Release the project page
- Add more results to the project page
- Release the code
- Release the data and Subject 306's pretrained model
- Upload the data of Subjects 218 and 304
- Upload the data of all other subjects
- Update the code to the latest version
Here, we provide the commands needed to build the conda environment:
# 1. create a new conda env & activate
conda create -n mega python=3.9
conda activate mega
# 2. run our script to install the requirements
./create_env.sh
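After the script finishes, a quick sanity check (a minimal sketch; it assumes the environment installs PyTorch with CUDA support) is:
# Sanity check (assumption: create_env.sh installs PyTorch); prints the torch version and CUDA availability
conda activate mega
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"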
We use the same 9 subjects from the NeRSemble dataset as GaussianAvatars in our experiments. Based on their provided data, we additionally generate depth maps and face parsing results. All pre-processed data and models needed to reproduce the results for Subject 306 are provided here.
Whether you want to train or test our method, you need to download the data and decompress it somewhere, e.g., /path/to/nersemble.
For more subjects' data, please download from here.
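For example, if the download is a single archive (the filename below is hypothetical), extraction could look like:
# Hypothetical example: adjust the archive name to whatever you actually downloaded
# (use unzip instead if the archive is a .zip)
mkdir -p /path/to/nersemble
tar -xzf nersemble_subject_306.tar.gz -C /path/to/nersemble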
To train a full MeGA avatar (taking Subject 306 as an example), you need to complete two steps.
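Both steps require replacing the placeholder paths ('/path/to/...') in the scripts and config files listed below. One way to locate and substitute them (a sketch; the exact placeholders in each file and your target paths will differ) is:
# Locate the placeholder paths that need editing
cd /path/to/MeGA
grep -n '/path/to/' ./scripts/train_hair.sh ./scripts/train_full.sh ./configs/nersemble/306/hair.yaml ./configs/nersemble/306/full.yaml
# Hypothetical replacement: '/data/nersemble' stands in for your actual data root
sed -i 's|/path/to/nersemble|/data/nersemble|g' ./scripts/train_hair.sh ./configs/nersemble/306/hair.yaml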
First, train a canonical hair model using
# Before executing the following commands, change every placeholder path ('/path/to/...') to your own path.
# Files to edit: ['./scripts/train_hair.sh', './configs/nersemble/306/hair.yaml']
cd /path/to/MeGA
bash ./scripts/train_hair.sh
After that, your hair model will be saved at the path you specified (i.e., $WORKSPACE/$VERSION/checkpoint_reset.pth).
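Before launching the second step, it can help to verify that this checkpoint is in place (a minimal sketch; $WORKSPACE and $VERSION are the values you configured for hair training):
# Check that the hair checkpoint from the first step exists
test -f "$WORKSPACE/$VERSION/checkpoint_reset.pth" && echo "hair checkpoint found" || echo "hair checkpoint missing"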
Next, train the full avatar model using
# Again, change every placeholder path ('/path/to/...') to your own path.
# Files to edit: ['./scripts/train_full.sh', './configs/nersemble/306/full.yaml']
cd /path/to/MeGA
bash ./scripts/train_full.sh
If you only want to render images for the test and validation sets or compute metrics, you can run
cd /path/to/MeGA
bash ./scripts/metrics.sh
The script first renders the images and then computes the metrics automatically.
As mentioned in our paper, MeGA supports several kinds of head editing. All related code is in ./funny_demo.
To perform hair alteration (e.g., replacing Subject 218's hair with Subject 306's), you can run
cd /path/to/MeGA
bash ./scripts/alter_hair.sh
We provide some 2D painting images in the preprocessed data (/path/to/nersemble/preprocess/306/306_EMO-1_v16_DS2-0.5x_lmkSTAR_teethV3_SMOOTH_offsetS_whiteBg_maskBelowLine/images/00000_08_*.png).
You can also create your own 2D painting images and apply them to the 3D head avatar with our scripts (see the sketch after the command below).
cd /path/to/MeGA
bash ./scripts/paint.sh
This optimization will take some time (several minutes).
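If you want to create your own painting image, one simple starting point (a sketch; the naming pattern and the 'my_edit' name are assumptions based on the provided files) is to copy one of the provided images and edit the copy in any image editor:
# Hypothetical example: copy a provided painting image and edit the copy by hand
cd /path/to/nersemble/preprocess/306/306_EMO-1_v16_DS2-0.5x_lmkSTAR_teethV3_SMOOTH_offsetS_whiteBg_maskBelowLine/images
cp "$(ls 00000_08_*.png | head -n 1)" 00000_08_my_edit.png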
Take the painted avatar above as an example: it will be saved somewhere like '/path/to/checkpoints/MeGA/0801/train_306_b16_MeGA/duola', and you can then render sequences with the painted avatar:
cd /path/to/MeGA
bash ./scripts/render.sh
The results will be saved somewhere like '/path/to/checkpoints/MeGA/0801/train_306_b16_MeGA/duola/exp3_eval'. If you want a video, run './scripts/img2video.sh' (which uses ffmpeg).
cd /path/to/MeGA
bash ./scripts/img2video.sh /path/to/checkpoints/MeGA/0801/train_306_b16_MeGA/duola/exp3_eval/renders
The video will be written to '/path/to/checkpoints/MeGA/0801/train_306_b16_MeGA/duola/exp3_eval/output.mp4'.
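If you prefer to call ffmpeg directly instead of the wrapper script, a rough equivalent (a sketch; the frame rate and encoding settings are assumptions, not necessarily what img2video.sh uses) is:
# Hypothetical example: encode the rendered frames directly with ffmpeg
cd /path/to/checkpoints/MeGA/0801/train_306_b16_MeGA/duola/exp3_eval/renders
ffmpeg -framerate 25 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p ../output.mp4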
If you find this code useful for your research, please consider citing:
@article{wang2024mega,
title={MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing},
author={Wang, Cong and Kang, Di and Sun, He-Yi and Qian, Shen-Han and Wang, Zi-Xuan and Bao, Linchao and Zhang, Song-Hai},
journal={arXiv preprint arXiv:2404.19026},
year={2024}
}