IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech
Existing autoregressive large-scale text-to-speech (TTS) models have advantages in speech naturalness, but their token-by-token generation mechanism makes it difficult to precisely control the duration of synthesized speech. This becomes a significant limitation in applications requiring strict audio-visual synchronization, such as video dubbing. This paper introduces IndexTTS2, which proposes a novel, general, and autoregressive model-friendly method for speech duration control. The method supports two generation modes: one explicitly specifies the number of generated tokens to precisely control speech duration; the other freely generates speech in an autoregressive manner without specifying the number of tokens, while faithfully reproducing the prosodic features of the input prompt. Furthermore, IndexTTS2 achieves disentanglement between emotional expression and speaker identity, enabling independent control over timbre and emotion. In the zero-shot setting, the model can accurately reconstruct the target timbre (from the timbre prompt) while perfectly reproducing the specified emotional tone (from the style prompt). To enhance speech clarity in highly emotional expressions, we incorporate GPT latent representations and design a novel three-stage training paradigm to improve the stability of the generated speech. Additionally, to lower the barrier for emotional control, we designed a soft instruction mechanism based on text descriptions by fine-tuning Qwen3, effectively guiding the generation of speech with the desired emotional orientation. Finally, experimental results on multiple datasets show that IndexTTS2 outperforms state-of-the-art zero-shot TTS models in terms of word error rate, speaker similarity, and emotional fidelity. Audio samples are available at: IndexTTS2 demo page
Tip: Please contact the authors for more detailed information. For commercial cooperation, please contact [email protected]
QQ Group: 553460296 (No.1), 663272642 (No.4)
Discord: https://discord.gg/uT32E7KDmy
Email: [email protected]
Everyone is welcome to join the discussion!
2025/09/08
🔥🔥🔥 We release IndexTTS-2, the first autoregressive TTS model with precise synthesis-duration control, supporting both duration-controlled and free-generation modes. (Note: this functionality is not yet enabled in this release.)
- The model achieves highly expressive emotional speech synthesis, with emotion control available through multiple input modalities.
2025/05/14
🔥🔥 We release IndexTTS-1.5, significantly improving the model's stability and its English performance.
2025/03/25
🔥 We release the IndexTTS-1.0 model parameters and inference code.
2025/02/12
🔥 We submitted our paper on arXiv, and released our demos and test sets.
An overview of IndexTTS2 is shown below.

The key contributions of IndexTTS2 are summarized as follows:
- We propose a duration adaptation scheme for autoregressive TTS models. IndexTTS2 is the first autoregressive zero-shot TTS model to combine precise duration control with natural duration generation, and the method generalizes to any large-scale autoregressive TTS model (see the illustrative sketch after this list).
- Emotional and speaker-related features are decoupled from the prompts, and a feature fusion strategy is designed to maintain semantic fluency and pronunciation clarity during emotionally rich expressions. We also developed an emotion control tool driven by natural language descriptions, for the benefit of users.
- To address the scarcity of highly expressive speech data, we propose an effective training strategy that significantly enhances the emotional expressiveness of zero-shot TTS to state-of-the-art (SOTA) level.
- We will publicly release the code and pre-trained weights to facilitate future research and practical applications.
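As a purely illustrative sketch of the duration-control idea (not the released API; duration control is not yet enabled in this release), the duration-specified mode amounts to converting a target duration into a fixed token budget for the autoregressive decoder. The token rate below is a hypothetical placeholder, not a documented IndexTTS2 constant:

```python
# Illustrative only: map a target duration to a fixed token budget for an
# autoregressive decoder. ASSUMED_TOKEN_RATE_HZ is a hypothetical placeholder,
# not a documented IndexTTS2 constant.
ASSUMED_TOKEN_RATE_HZ = 25.0  # assumption: semantic tokens per second of audio

def duration_to_token_budget(target_seconds: float) -> int:
    """Number of tokens to generate for a desired speech duration."""
    return round(target_seconds * ASSUMED_TOKEN_RATE_HZ)

print(duration_to_token_budget(3.2))  # -> 80 tokens for a 3.2 s target
```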
| HuggingFace | ModelScope |
|---|---|
| 😁 IndexTTS-2 | IndexTTS-2 |
| IndexTTS-1.5 | IndexTTS-1.5 |
| IndexTTS | IndexTTS |
- Download this repository:
```bash
git clone https://github.com/index-tts/index-tts.git
cd index-tts
git lfs pull
```
- Install dependencies:
We use `uv` to initialize and manage the project's dependency environment.
```bash
uv sync
```
- Download models:
Download via `huggingface-cli`:
```bash
huggingface-cli download IndexTeam/IndexTTS-2 \
  bpe.model config.yaml feat1.pt feat2.pt gpt.pth qwen0.6bemo4-merge s2mel.pth wav2vec2bert_stats.pt \
  --local-dir checkpoints
```
Or via `wget`:
```bash
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/bpe.model -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/config.yaml -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/feat1.pt -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/feat2.pt -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/gpt.pth -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/qwen0.6bemo4-merge -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/s2mel.pth -P checkpoints
wget https://huggingface.co/IndexTeam/IndexTTS-2/resolve/main/wav2vec2bert_stats.pt -P checkpoints
```
Recommended for users in China: if the download is slow, you can use a mirror:
```bash
export HF_ENDPOINT="https://hf-mirror.com"
```
Or download via `modelscope`:
```bash
modelscope download --model IndexTeam/IndexTTS-2 --local_dir checkpoints
```
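Whichever method you use, a quick sanity check that all expected artifacts landed in `checkpoints/` can save a confusing failure later. A minimal sketch using only the file names listed above (assuming `qwen0.6bemo4-merge` is a directory and the rest are files):

```python
# Verify that every expected checkpoint artifact exists under checkpoints/.
# Assumption: qwen0.6bemo4-merge is a directory; os.path.exists covers both.
import os

EXPECTED = [
    "bpe.model", "config.yaml", "feat1.pt", "feat2.pt", "gpt.pth",
    "qwen0.6bemo4-merge", "s2mel.pth", "wav2vec2bert_stats.pt",
]
missing = [n for n in EXPECTED if not os.path.exists(os.path.join("checkpoints", n))]
print("missing:", missing or "none")
```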
Examples of running scripts with `uv`:
```bash
PYTHONPATH=$PYTHONPATH:. uv run python indextts/infer_v2.py
```
- Synthesize speech with a single reference audio only:
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", is_fp16=False, use_cuda_kernel=False)
text = "Translate for me, what is a surprise!"
tts.infer(spk_audio_prompt='examples/voice_01.wav', text=text, output_path="gen.wav", verbose=True)
```
- Use additional emotional reference audio to condition speech synthesis:
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", is_fp16=False, use_cuda_kernel=False)
text = "酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"
tts.infer(spk_audio_prompt='examples/voice_07.wav', text=text, output_path="gen.wav", emo_audio_prompt="examples/emo_sad.wav", verbose=True)
```
- When an emotional reference audio is specified, you can additionally set the `emo_alpha` parameter (default: 1.0):
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", is_fp16=False, use_cuda_kernel=False)
text = "酒楼丧尽天良,开始借机竞拍房间,哎,一群蠢货。"
tts.infer(spk_audio_prompt='examples/voice_07.wav', text=text, output_path="gen.wav", emo_audio_prompt="examples/emo_sad.wav", emo_alpha=0.9, verbose=True)
```
- It’s also possible to omit the emotional reference audio and instead provide an 8-float list specifying the intensity of each base emotion (Happy | Angry | Sad | Fear | Hate | Low | Surprise | Neutral). You can additionally set the `use_random` parameter to decide whether to introduce stochasticity during inference; the default is `False`, and setting it to `True` increases randomness (a helper sketch for building this vector follows the example):
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", is_fp16=False, use_cuda_kernel=False)
text = "哇塞!这个爆率也太高了!欧皇附体了!"
tts.infer(spk_audio_prompt='examples/voice_10.wav', text=text, output_path="gen.wav", emo_vector=[0, 0, 0, 0, 0, 0, 0.45, 0], use_random=False, verbose=True)
```
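Because the positional meaning of the 8 floats is easy to get wrong, a small convenience helper can build the vector from named intensities. This helper is hypothetical (not part of the IndexTTS2 API); it only encodes the documented emotion order:

```python
# Hypothetical helper (not part of the IndexTTS2 API): build the 8-float
# emo_vector from named intensities, following the documented order
# Happy | Angry | Sad | Fear | Hate | Low | Surprise | Neutral.
EMOTIONS = ["happy", "angry", "sad", "fear", "hate", "low", "surprise", "neutral"]

def make_emo_vector(**intensities: float) -> list[float]:
    vec = [0.0] * len(EMOTIONS)
    for name, value in intensities.items():
        vec[EMOTIONS.index(name)] = value  # raises ValueError on an unknown name
    return vec

print(make_emo_vector(surprise=0.45))  # matches the emo_vector used above
```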
- Use a text emotion description via `use_emo_text` to guide synthesis. Control randomness with `use_random` (default: `False`; `True` adds randomness):
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", is_fp16=False, use_cuda_kernel=False)
text = "快躲起来!是他要来了!他要来抓我们了!"
tts.infer(spk_audio_prompt='examples/voice_12.wav', text=text, output_path="gen.wav", use_emo_text=True, use_random=False, verbose=True)
```
- Without `emo_text`, the emotion is inferred from the synthesis text itself; with `emo_text`, it is inferred from the provided description:
```python
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints", is_fp16=False, use_cuda_kernel=False)
text = "快躲起来!是他要来了!他要来抓我们了!"
emo_text = "你吓死我了!你是鬼吗?"
tts.infer(spk_audio_prompt='examples/voice_12.wav', text=text, output_path="gen.wav", use_emo_text=True, emo_text=emo_text, use_random=False, verbose=True)
```
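For longer jobs, the documented `infer()` call can simply be looped over a list of lines while reusing one model instance; a sketch under that assumption (the prompt and output file names here are illustrative):

```python
# Batch several lines through the documented infer() API with one model
# instance. Prompt and output file names are illustrative.
from indextts.infer_v2 import IndexTTS2

tts = IndexTTS2(cfg_path="checkpoints/config.yaml", model_dir="checkpoints",
                is_fp16=False, use_cuda_kernel=False)
lines = ["第一句台词。", "第二句台词。"]
for i, line in enumerate(lines):
    tts.infer(spk_audio_prompt="examples/voice_07.wav", text=line,
              output_path=f"gen_{i:03d}.wav", verbose=True)
```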
- Legacy usage with the IndexTTS-1.5 inference API:
```python
from indextts.infer import IndexTTS

tts = IndexTTS(model_dir="checkpoints", cfg_path="checkpoints/config.yaml")
voice = "examples/voice_07.wav"
text = "大家好,我现在正在bilibili 体验 ai 科技,说实话,来之前我绝对想不到!AI技术已经发展到这样匪夷所思的地步了!比如说,现在正在说话的其实是B站为我现场复刻的数字分身,简直就是平行宇宙的另一个我了。如果大家也想体验更多深入的AIGC功能,可以访问 bilibili studio,相信我,你们也会吃惊的。"
tts.infer(voice, text, 'gen.wav')
```
For more information, see README_INDEXTTS_1_5, or visit the specific version at index-tts:v1.5.0
```bash
PYTHONPATH=$PYTHONPATH:. uv run webui.py
```
Open your browser and visit http://127.0.0.1:7860 to see the demo.
On Windows, you may encounter an error when installing `pynini`:
```
ERROR: Failed building wheel for pynini
```
In this case, please install `pynini` via `conda`:
```bash
# after conda activate index-tts
conda install -c conda-forge pynini==2.1.5
pip install WeTextProcessing==1.0.3
pip install -e ".[webui]"
```
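A quick import smoke test can confirm the conda-installed packages are visible in the active environment; `tn.chinese.normalizer` is the module name WeTextProcessing installs, to the best of our knowledge:

```python
# Smoke test: both imports should succeed inside `conda activate index-tts`.
import pynini
from tn.chinese.normalizer import Normalizer  # installed by WeTextProcessing

print("pynini", pynini.__version__, "imported OK")
```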
IndexTTS1: [Paper]; [Demo]; [ModelScope]; [HuggingFace]
🌟 If you find our work helpful, please leave us a star and cite our paper.
IndexTTS2
```bibtex
@article{zhou2025indextts2,
  title={IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech},
  author={Siyi Zhou and Yiquan Zhou and Yi He and Xun Zhou and Jinchao Wang and Wei Deng and Jingchen Shu},
  journal={arXiv preprint arXiv:2506.21619},
  year={2025}
}
```
IndexTTS
```bibtex
@article{deng2025indextts,
  title={IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System},
  author={Wei Deng and Siyi Zhou and Jingchen Shu and Jinchao Wang and Lu Wang},
  journal={arXiv preprint arXiv:2502.05512},
  year={2025},
  doi={10.48550/arXiv.2502.05512},
  url={https://arxiv.org/abs/2502.05512}
}
```