- This repository is intended for academic research only; any commercial use is prohibited.
- The copyright of the digital human presented in our demo is reserved by SMT.
- Thanks to Shanghai Media Tech (SMT) for providing the dataset and rendering service.
- We use the pre-trained HuBERT model from this repository.
- We use the implementation of soft-DTW loss from this repository.
- We use the implementation of Transformer from this repository.

Thanks to the authors of the above repositories.
- contrast_TTSaug_record.mp4
- contrast_TTSaug_Obama.mp4
- contrast_TTSaug_News_f.mp4
- contrast_TTSaug_News_m.mp4
- contrast_T2A_record.mp4
- contrast_T2A_Obama.mp4
- contrast_T2A_News_f.mp4
- contrast_T2A_News_m.mp4
- contrast_features.mp4
- contrast_loss.mp4
- contrast_resource.mp4
The pre-trained HuBERT model is obtained from this repository.
Please download the Chinese HuBERT model and put it in the directory ./data/pretrained_models/ by executing the following command:
wget -P ./data/pretrained_models/ https://huggingface.co/TencentGameMate/chinese-hubert-large/resolve/main/chinese-hubert-large-fairseq-ckpt.pt
git clone [email protected]:facebookresearch/fairseq.git
cd fairseq
git checkout acd9a53
pip install --editable ./
cd ..
cp hubert.py ./fairseq/fairseq/models/hubert/
python utils/generate_hubert.py --input_dir ./data/wavs/[speaker name] --output_dir ./data/wav_features/[speaker name]
python utils/generate_mfcc.py --input_dir ./data/wavs/[speaker name] --output_dir ./data/wav_features/[speaker name]
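The 39-dimensional MFCC features used below are, by the usual convention, 13 static coefficients plus their deltas and delta-deltas. A minimal numpy sketch of that stacking, assuming the standard regression-based delta (generate_mfcc.py itself may compute this differently):

```python
import numpy as np

def delta(feat: np.ndarray, n: int = 2) -> np.ndarray:
    """Regression-based delta over a window of +/- n frames."""
    T = feat.shape[0]
    denom = 2 * sum(i * i for i in range(1, n + 1))
    padded = np.pad(feat, ((n, n), (0, 0)), mode="edge")
    d = np.zeros_like(feat)
    for t in range(T):
        d[t] = sum(
            i * (padded[t + n + i] - padded[t + n - i]) for i in range(1, n + 1)
        ) / denom
    return d

def stack_39(mfcc13: np.ndarray) -> np.ndarray:
    """(T, 13) static MFCCs -> (T, 39) with deltas and delta-deltas."""
    d1 = delta(mfcc13)
    d2 = delta(d1)
    return np.concatenate([mfcc13, d1, d2], axis=1)

feats = stack_39(np.random.randn(100, 13))
print(feats.shape)  # (100, 39)
```

This matches the input_dim of 39 listed for mfcc in the options below; at a 10 ms hop on 16 kHz audio, these frames arrive at 100 Hz, matching the freq option.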
Run bash train.sh to train.
- arch: chinese_hubert_large | mfcc | ppg
- feature_combine: set to True to use a weighted sum of HuBERT features
- output_path: "result" to generate output for the test set | [other name] to generate output for other data
- test_input_path: must be assigned explicitly if output_path != "result"; it is the directory containing the csv files
- test_epoch: no need to assign explicitly; the model with the best validation performance will be found automatically
- root_dir: root directory of the dataset
- feature_dir: hubert_large | mfcc | ppg
- train_speaker_list: assign one or more speaker names for training
- train_json: path of the json file listing the audio names of the training set; used to change the data source
- freq: 50 if the feature is chinese_hubert_large or ppg, 100 if the feature is mfcc
- input_dim: 39 for mfcc, 128 for ppg
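The feature_combine option presumably learns one scalar weight per HuBERT layer and sums the layers' hidden states. A minimal numpy sketch of such a weighted sum, with illustrative shapes (chinese-hubert-large has 24 transformer layers with hidden size 1024; the extra layer here stands for the input embedding, and the weights are random stand-ins for learned parameters):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

# Hidden states from L layers, each (T, D); stand-ins for HuBERT activations.
L, T, D = 25, 100, 1024  # 24 transformer layers + input embedding
layers = np.random.randn(L, T, D)

# Per-layer scalar weights (learnable in the real model), normalized to sum to 1.
w = softmax(np.random.randn(L))

# Weighted sum over the layer axis -> a single (T, D) feature sequence.
combined = np.einsum("l,ltd->td", w, layers)
print(combined.shape)  # (100, 1024)
```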
Run bash validate.sh to pick the best model by validating on the validation set of a certain speaker; change --val_speaker to choose the speaker used for validation.
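The automatic selection behind test_epoch can be sketched as picking the checkpoint epoch with the lowest validation loss; the metric name and the epoch-to-loss mapping below are illustrative, not the script's actual log format:

```python
# Map epoch -> validation loss, e.g. parsed from validation logs.
val_losses = {10: 0.41, 20: 0.33, 30: 0.35, 40: 0.36}

# Pick the epoch whose model performed best on the validation set.
best_epoch = min(val_losses, key=val_losses.get)
print(best_epoch)  # 20
```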
Run bash test.sh to test.
@inproceedings{chen2023improving,
  title={Improving Few-Shot Learning for Talking Face System with TTS Data Augmentation},
  author={Chen, Qi and Ma, Ziyang and Liu, Tao and Tan, Xu and Lu, Qu and Yu, Kai and Chen, Xie},
  booktitle={ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2023}
}