<!DOCTYPE html>
<html lang='en'>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>USING MULTIPLE REFERENCE AUDIOS AND STYLE EMBEDDING CONSTRAINTS FOR SPEECH SYNTHESIS</title>
<link rel="shortcut icon" href="images/favicon.ico">
<link rel="stylesheet" type="text/css" href="style.css" media="screen">
<script type="text/javascript" async
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.4/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
</head>
<body>
<div class="content">
<h1>USING MULTIPLE REFERENCE AUDIOS AND STYLE EMBEDDING CONSTRAINTS FOR SPEECH SYNTHESIS</h1>
<p id="authors">
Cheng Gong
Longbiao Wang <sup>*</sup>
Zhenhua Ling
Ju Zhang
Jianwu Dang
<br><br>
<br>Submitted to <a href="https://2022.ieeeicassp.org/" target="_blank">ICASSP 2022</a>
</p>
<div class="center">
<a class="button" href="https://arxiv.org/abs/2005.08484">Paper (arXiv)</a>
</div>
<div class="footnote">
*Corresponding author.
</div>
</div>
<div class="content">
<h2>Abstract</h2>
<p>
The end-to-end speech synthesis model can directly take an utterance as reference audio and generate speech from text with prosody and speaker characteristics similar to that reference. However, an appropriate acoustic embedding must be selected manually during inference. Moreover, because only matched text and speech are used during training, inference with unmatched text and speech can cause the model to synthesize speech
with low content quality. In this study, we propose to mitigate these two problems by using multiple reference audios and style embedding constraints rather than only the target audio. Multiple reference audios are selected automatically using the sentence similarity determined by Bidirectional Encoder Representations from Transformers (BERT). In addition, we use a “target” style embedding from a pre-trained encoder as a constraint by considering the mutual information between the predicted and “target” style embeddings. The experimental results show that the proposed model improves speech naturalness and content quality with multiple reference audios, and also outperforms the baseline model in ABX preference tests of style similarity.
</p>
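<p>
As a rough illustration of the reference-selection step described above, the sketch below ranks candidate utterances by BERT sentence similarity and keeps the top <i>k</i>. This is a minimal sketch, not the authors' implementation: the checkpoint (<code>bert-base-uncased</code>), mean pooling, cosine similarity, and <code>k=8</code> are all assumptions made for illustration.
</p>
<pre><code># Minimal sketch of BERT-based reference selection (not the authors' code).
# Assumptions: bert-base-uncased, mean pooling, cosine similarity, top-k = 8.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def embed(texts):
    # Mean-pool the last hidden states into one vector per sentence.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)       # (B, H)

def select_references(target_text, candidate_texts, k=8):
    # Indices of the k candidates whose transcripts are most similar
    # to the target text under cosine similarity.
    sims = torch.nn.functional.cosine_similarity(
        embed([target_text]), embed(candidate_texts))
    return sims.topk(k).indices.tolist()</code></pre>
<p>
Likewise, the style-embedding constraint can be read as adding a mutual-information term to the training objective, along the lines of $$\mathcal{L} = \mathcal{L}_{\text{rec}} - \lambda \, \hat{I}(\hat{\mathbf{e}}\,;\,\mathbf{e}^{*}),$$ where \(\hat{\mathbf{e}}\) is the predicted style embedding, \(\mathbf{e}^{*}\) is the “target” embedding from the pre-trained encoder, and \(\hat{I}\) is a sample-based mutual-information estimate. The exact estimator and the weight \(\lambda\) are assumptions here, not taken from the paper.
</p>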
</div>
<div class="content">
<h2>Text-to-Speech Samples for Unseen Speakers During Training</h2>
<p>
These examples are sampled from the evaluation set for Table 1 in the paper.
Each column corresponds to a single speaker, and each row corresponds to a different model.
Our proposed model in the last row, <b>Attentron(8-8)</b>, showed the best results in the evaluation, including MOS, in terms of speaker similarity and speech quality.
</p>
<table>
<thead>
<tr>
<th></th><th>VCTK p304</th><th>VCTK p311</th><th>VCTK p316</th><th>VCTK p305</th><th>VCTK p306</th><th>VCTK p312</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>Text</b></td>
<td>It's totally double standards.</td>
<td>His third goal was superb.</td>
<td>He declined to give further details.</td>
<td>We had a reunion last week.</td>
<td>You'd think there was a match on today.</td>
<td>I think it must be the uniforms.</td>
</tr>
</tbody>
<tbody>
<tr>
<td>Ground-truth</td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_groundtruth.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_groundtruth.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_groundtruth.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_groundtruth.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_groundtruth.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_groundtruth.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
<tbody>
<tr>
<td>LDE(1)</td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_LDE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_LDE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_LDE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_LDE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_LDE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_LDE1.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
<tbody>
<tr>
<td>LDE(8)</td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_LDE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_LDE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_LDE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_LDE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_LDE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_LDE8.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
<tbody>
<tr>
<td>GMVAE(1)</td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_GMVAE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_GMVAE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_GMVAE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_GMVAE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_GMVAE1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_GMVAE1.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
<tbody>
<tr>
<td>GMVAE(8)</td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_GMVAE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_GMVAE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_GMVAE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_GMVAE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_GMVAE8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_GMVAE8.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
<tbody>
<tr>
<td>Attentron(1-1)</td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_proposed1-1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_proposed1-1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_proposed1-1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_proposed1-1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_proposed1-1.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_proposed1-1.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
<tbody>
<tr>
<td><b>Attentron(8-8)</b></td>
<td><audio controls=""><source src="demos/p304_119_WaveRNN_p304_proposed8-8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p311_060_WaveRNN_p311_proposed8-8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p316_062_WaveRNN_p316_proposed8-8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p305_033_WaveRNN_p305_proposed8-8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p306_077_WaveRNN_p306_proposed8-8.wav" type="audio/wav"></audio></td>
<td><audio controls=""><source src="demos/p312_043_WaveRNN_p312_proposed8-8.wav" type="audio/wav"></audio></td>
</tr>
</tbody>
</table>
</div>
<div class="content">
<h2> Methods </h2>
<div class="center">
<img src="images/architecture.png", width="600">
<img src="images/attention_module.png" width="600">
</div>
<br> <br>
In this paper, we propose <i>Attentron</i>, a novel few-shot TTS architecture for unseen speakers. The model has two main components (a minimal sketch of the attention step follows the list):
<ul>
<li><b>Fine-grained encoder</b>, which uses an attention mechanism to extract detailed styles from multiple reference audios.
It allows the model to take any number of reference audios, and quality improves as more references are provided.
The fine-grained encoder generates a variable-length embedding that preserves temporal information.
</li>
<li><b>Coarse-grained encoder</b>, which extracts the overall characteristics of the speech and helps stabilize the outputs.
</li>
</ul>
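<br>
Below is a minimal sketch of the fine-grained attention step, attending over frames concatenated from all reference audios. Single-head dot-product attention, the dimensions, and the use of the decoder state as the query are illustrative assumptions, not the exact architecture from the paper.
<pre><code># Minimal sketch of attending over frames from multiple reference audios.
# Single-head dot-product attention; dimensions are illustrative.
import torch
import torch.nn as nn

class FineGrainedAttention(nn.Module):
    def __init__(self, query_dim=256, ref_dim=128, attn_dim=128):
        super().__init__()
        self.q = nn.Linear(query_dim, attn_dim)
        self.k = nn.Linear(ref_dim, attn_dim)
        self.v = nn.Linear(ref_dim, attn_dim)

    def forward(self, query, ref_frames):
        # query: (B, query_dim) decoder state for the current step.
        # ref_frames: (B, T, ref_dim) frames concatenated along time from
        # all reference audios, so any number of references becomes one
        # variable-length sequence.
        q = self.q(query).unsqueeze(1)              # (B, 1, attn_dim)
        k = self.k(ref_frames)                      # (B, T, attn_dim)
        v = self.v(ref_frames)                      # (B, T, attn_dim)
        scores = torch.bmm(q, k.transpose(1, 2))    # (B, 1, T)
        weights = scores.softmax(dim=-1)            # attention over frames
        return torch.bmm(weights, v).squeeze(1)     # (B, attn_dim) context

# Example: 8 reference audios contributing 37 frames in total.
attn = FineGrainedAttention()
context = attn(torch.randn(2, 256), torch.randn(2, 37, 128))</code></pre>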
</div>
<div class="content">
<h3>Citation</h3>
To cite our work, please use the following BibTeX entry:
<pre><code>@misc{choi2020attentron,
  title={Attentron: Few-Shot Text-to-Speech Utilizing Attention-Based Variable-Length Embedding},
  author={Seungwoo Choi and Seungju Han and Dongyoung Kim and Sungjoo Ha},
  year={2020},
  eprint={2005.08484},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}</code></pre>
</div>
<div class="center">
This page uses a template from the <a href="https://hyperconnect.github.io/MarioNETte/">project page</a> of Ha et al., <i>"MarioNETte: Few-shot Face Reenactment Preserving Identity of Unseen Targets"</i>.
</div>
</body>
</html>