Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement


We present our reproduction of Vevo, a versatile zero-shot voice imitation framework with controllable timbre and style. We invite you to explore the audio samples to experience Vevo's capabilities firsthand.



We have included the following pre-trained Vevo models at Amphion:

  • Vevo-Timbre: It can conduct style-preserved voice conversion.
  • Vevo-Style: It can conduct style conversion, such as accent conversion and emotion conversion.
  • Vevo-Voice: It can conduct style-converted voice conversion.
  • Vevo-TTS: It can conduct style and timbre controllable TTS.

In addition, we release the content tokenizer and content-style tokenizer proposed in Vevo. Notably, all of these pre-trained models are trained on Emilia, which contains 101k hours of speech data across six languages (English, Chinese, German, French, Japanese, and Korean).

Quickstart

To run this model, you need to follow the steps below:

  1. Clone the repository and install the environment.
  2. Run the inference script.

Clone and Environment Setup

1. Clone the repository

git clone https://github.com/open-mmlab/Amphion.git
cd Amphion

2. Install the environment

Before installing, make sure you are in the Amphion directory. If not, use cd to enter it.

Since we use phonemizer to convert text to phonemes, you need to install espeak-ng first. More details can be found here. Choose the correct installation command according to your operating system:

# For Debian-like distribution (e.g. Ubuntu, Mint, etc.)
sudo apt-get install espeak-ng
# For RedHat-like distribution (e.g. CentOS, Fedora, etc.) 
sudo yum install espeak-ng

# For Windows
# Please visit https://github.com/espeak-ng/espeak-ng/releases to download .msi installer
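
As a quick check that espeak-ng installed correctly, print its version:

espeak-ng --version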

Now, install the Python environment. We recommend using conda:

conda create -n vevo python=3.10
conda activate vevo

pip install -r models/vc/vevo/requirements.txt
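
Optionally, you can run a quick sanity check that phonemizer can reach the espeak backend before moving on to inference. This is just a one-line sketch using phonemizer's public phonemize function with the espeak backend:

# Optional sanity check: phonemize a short sentence via espeak-ng
python -c "from phonemizer import phonemize; print(phonemize('hello world', language='en-us', backend='espeak'))"

If espeak-ng and the environment are set up correctly, this prints the phonemized text (something like 'həloʊ wɜːld', depending on your espeak-ng version).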

Inference Script

# Vevo-Timbre
python -m models.vc.vevo.infer_vevotimbre

# Vevo-Style
python -m models.vc.vevo.infer_vevostyle

# Vevo-Voice
python -m models.vc.vevo.infer_vevovoice

# Vevo-TTS
python -m models.vc.vevo.infer_vevotts

Running any of these commands will automatically download the pretrained models from HuggingFace and start the inference process. The resulting audio is saved to models/vc/vevo/wav/output*.wav by default; you can change this in the corresponding models/vc/vevo/infer_vevo*.py script.
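
If you prefer to fetch the checkpoints ahead of time (for example, to cache them before going offline), you can pre-download them with huggingface_hub. This is a minimal sketch, assuming the checkpoints are hosted under an amphion/Vevo repository and that the inference scripts read from the standard HuggingFace cache; the repository ID is an assumption, not part of the official instructions:

# Optional: pre-download the Vevo checkpoints into the default HuggingFace cache (assumed repo ID)
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='amphion/Vevo')"

After inference finishes, you can list models/vc/vevo/wav/ to confirm the generated output*.wav files.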

Citations

If you use Vevo in your research, please cite the following papers:

@inproceedings{vevo,
  author       = {Xueyao Zhang and Xiaohui Zhang and Kainan Peng and Zhenyu Tang and Vimal Manohar and Yingru Liu and Jeff Hwang and Dangna Li and Yuhao Wang and Julian Chan and Yuan Huang and Zhizheng Wu and Mingbo Ma},
  title        = {Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement},
  booktitle    = {{ICLR}},
  publisher    = {OpenReview.net},
  year         = {2025}
}

@inproceedings{amphion,
  author       = {Xueyao Zhang and Liumeng Xue and Yicheng Gu and Yuancheng Wang and Jiaqi Li and Haorui He and Chaoren Wang and Ting Song and Xi Chen and Zihao Fang and Haopeng Chen and Junan Zhang and Tze Ying Tang and Lexiao Zou and Mingxuan Wang and Jun Han and Kai Chen and Haizhou Li and Zhizheng Wu},
  title        = {Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
  booktitle    = {{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
  year         = {2024}
}