
SC_VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer based on VALL-E

This project is built on the unofficial PyTorch implementation of VALL-E by "enhuiz".

You can check the audio demo of SC VALL-E at the following link.

You can read the paper on SC VALL-E at the following link.

Getting Started

Setting up the development environment for this project can be challenging because of version conflicts between the various libraries, so we manage the environment with a Docker container.

The Docker image used to create the container can be downloaded from Docker Hub.
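
The image name is not specified here, so the commands below are only a sketch with a placeholder tag; replace it with the actual image published on Docker Hub (GPU access additionally requires the NVIDIA container toolkit on the host).

docker pull <dockerhub-user>/sc_vall_e:latest
docker run -it --gpus all -v $(pwd):/workspace <dockerhub-user>/sc_vall_e:latest /bin/bash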

Dataset

The data used for model training can be downloaded from the following link.

Installing

Clone this GitHub repository:

git clone https://github.com/0913ktg/SC_VALL-E
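
If the repository follows the packaging layout of enhuiz's vall-e (an assumption, not confirmed here), it can then be installed in editable mode; otherwise, install the dependencies listed in the repository.

cd SC_VALL-E
pip install -e .  # assumes a setup script is present, as in enhuiz's vall-e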

Train

  1. Put your data into a folder, e.g. data/your_data. Audio files should be named with the suffix .wav and the corresponding text files with .normalized.txt (a layout sketch follows these steps).

  2. Quantize the data:

python -m vall_e.emb.qnt data/your_data

  3. Generate phonemes from the text:

python -m vall_e.emb.g2p data/your_data

  4. Customize your configuration by creating config/your_data/ar.yml and config/your_data/nar.yml. Refer to the example configs in config/korean and vall_e/config.py for details (a minimal config sketch also follows these steps). You may choose different model presets; check vall_e/vall_e/__init__.py.

  5. Train the AR and NAR models using the following script (once with each config):

python -m vall_e.train yaml=config/your_data/ar_or_nar.yml

You can quit training at any time by typing quit in your CLI; the latest checkpoint will be saved automatically.
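
For orientation, the sketch below shows an illustrative data layout and writes a minimal AR config. The file names are placeholders, and the config keys (data_dirs, model) are assumptions drawn from enhuiz's vall-e configs; verify them against config/korean and vall_e/config.py.

# Illustrative layout (file names are placeholders):
#   data/your_data/utt0001.wav
#   data/your_data/utt0001.normalized.txt

# Minimal AR config sketch; verify the keys against config/korean and vall_e/config.py:
mkdir -p config/your_data
cat > config/your_data/ar.yml <<'EOF'
data_dirs: [data/your_data]
model: ar
EOF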

Export

Both trained models need to be exported before synthesis. To export either of them, run:

python -m vall_e.export zoo/ar_or_nar.pt yaml=config/your_data/ar_or_nar.yml

This will export the latest checkpoint.
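
For example, to export both checkpoints to the zoo/ paths used by the synthesis command below:

python -m vall_e.export zoo/ar.pt yaml=config/your_data/ar.yml
python -m vall_e.export zoo/nar.pt yaml=config/your_data/nar.yml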

Synthesis

python -m vall_e <text> <ref_path> <out_path> --ar-ckpt zoo/ar.pt --nar-ckpt zoo/nar.pt
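
A concrete invocation with illustrative file names (ref.wav is the speaker reference, out.wav the synthesized output):

python -m vall_e "This is a synthesized sample." ref.wav out.wav --ar-ckpt zoo/ar.pt --nar-ckpt zoo/nar.pt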

Notice

  • EnCodec is licensed under CC BY-NC 4.0. If you use this code to quantize or decode audio, make sure to comply with the terms of that license.

Citation

@article{kim2023sc,
  title={SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer},
  author={Kim, Daegyeom and Hong, Seongho and Choi, Yong-Hoon},
  journal={arXiv preprint arXiv:2307.10550},
  year={2023}
}
@article{defossez2022highfi,
  title={High Fidelity Neural Audio Compression},
  author={Défossez, Alexandre and Copet, Jade and Synnaeve, Gabriel and Adi, Yossi},
  journal={arXiv preprint arXiv:2210.13438},
  year={2022}
}

Acknowledgments

  • Hat tip to anyone whose code was used

Processing the data downloaded from AI Hub for training involved many difficulties. If preprocessed data is available, use it whenever possible; if you plan to process data downloaded from the internet yourself, be aware that preprocessing can be time-consuming.

Large differences in utterance length across the training data can make training inefficient. We recommend using audio files with utterance lengths between 3 and 7 seconds (a filtering sketch appears at the end of this section).

  • Inspiration

I would like to express my gratitude to enhuiz for providing well-implemented PyTorch source code.

I also extend my thanks to Moon-sung-woo for providing valuable assistance in getting started with Expressive TTS.
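
As a sketch of the length filtering mentioned above, the loop below copies only 3-7 second clips (and their transcripts) into a separate folder. It assumes sox (soxi) is installed, and the folder names are placeholders.

mkdir -p data/filtered
for f in data/your_data/*.wav; do
  d=$(soxi -D "$f")  # clip duration in seconds (requires sox)
  if awk -v d="$d" 'BEGIN { exit !(d >= 3 && d <= 7) }'; then
    cp "$f" "${f%.wav}.normalized.txt" data/filtered/
  fi
done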
