Forward-Backward Decoding Sequence for Regularizing End-to-End TTS
Zheng, Yibin2,3; Tao, Jianhua1; Wen, Zhengqi2; Yi, Jiangyan2
Journal | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
Date | 2019-12-01
Volume | 27
Issue | 12
Pages | 2067-2079
Keywords | Decoding ; Training ; Speech processing ; Linguistics ; Acoustics ; Speech recognition ; Forward-backward regularization ; encoder-decoder with attention ; end-to-end ; joint-training ; TTS
ISSN | 2329-9290
DOI | 10.1109/TASLP.2019.2935807
Corresponding Author | Tao, Jianhua (jhtao@nlpr.ia.ac.cn)
Abstract | Neural end-to-end TTS systems, such as Tacotron-like networks, can generate very high-quality synthesized speech that even approaches human recordings for in-domain text. However, they perform unsatisfactorily when scaled to challenging test sets. One concern is that the attention-based encoder-decoder network adopts an autoregressive generative sequence model that suffers from "exposure bias": errors made early can be quickly amplified, harming subsequent sequence generation. To address this issue, we propose two novel methods that aim to predict the future by improving the agreement between the forward and backward decoding sequences. The first (denoted MRBA) adds divergence regularization terms to the model training objective to maximize the agreement between two directional models, namely L2R (which generates targets from left to right) and R2L (which generates targets from right to left). The second (denoted BDR) operates at the decoder level and exploits future information during decoding: by introducing a regularization term into the training objective of the forward and backward decoders, the forward decoder's hidden states are forced to be close to the backward decoder's. Thus, the hidden representations of a unidirectional decoder are encouraged to embed useful information about the future. Moreover, so that forward and backward decoding improve each other interactively, a joint training method is designed. Experimental results on both English and Mandarin datasets show that our proposed methods, especially the second (BDR), lead to a significant improvement in both robustness and overall naturalness, achieving clear preference advantages on a challenging test set and state-of-the-art performance on a general test set (outperforming the baseline, a revised version of Tacotron 2, by 0.13 and 0.12 MOS for English and Mandarin, respectively).
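The decoder-level regularization described in the abstract — pulling the forward decoder's hidden states toward the time-reversed backward decoder's — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the plain L2 penalty, the function names, and the weighting `lam` are assumptions for exposition.

```python
import numpy as np

def bdr_regularizer(h_fwd, h_bwd, lam=1.0):
    """Penalty pulling forward-decoder hidden states toward the
    time-reversed backward-decoder hidden states (illustrative L2 form)."""
    # h_fwd, h_bwd: (T, D) arrays of per-step hidden states.
    # Reverse the backward decoder's sequence so step t of both
    # directions refers to the same target frame.
    aligned = h_bwd[::-1]
    return lam * np.mean((h_fwd - aligned) ** 2)

def total_loss(loss_fwd, loss_bwd, h_fwd, h_bwd, lam=0.5):
    """Joint objective sketch: both directional reconstruction losses
    plus the agreement regularizer, trained together so each direction
    can improve the other."""
    return loss_fwd + loss_bwd + bdr_regularizer(h_fwd, h_bwd, lam)
```

When the two directions agree perfectly (after time reversal) the penalty vanishes, so minimizing the joint objective encourages the unidirectional forward decoder to encode information about future frames.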
Funding Projects | National Natural Science Foundation of China (NSFC)[61425017] ; National Natural Science Foundation of China (NSFC)[61773379] ; National Natural Science Foundation of China (NSFC)[61603390] ; National Natural Science Foundation of China (NSFC)[61771472] ; National Key Research & Development Plan of China[2017YFC0820602] ; State Key Program of the National Natural Science Foundation of China (NSFC)[61831022] ; Inria-CAS Joint Research Project[173211KYSB20170061]
WOS Research Areas | Acoustics ; Engineering
Language | English
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number | WOS:000492182400001
Funding Organizations | National Natural Science Foundation of China (NSFC) ; National Key Research & Development Plan of China ; State Key Program of the National Natural Science Foundation of China (NSFC) ; Inria-CAS Joint Research Project
Content Type | Journal article
Source URL | [http://ir.ia.ac.cn/handle/173211/28882]
Collection | National Laboratory of Pattern Recognition - Intelligent Interaction
Author Affiliations | 1. Chinese Acad Sci, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China ; 2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China ; 3. Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Zheng, Yibin, Tao, Jianhua, Wen, Zhengqi, et al. Forward-Backward Decoding Sequence for Regularizing End-to-End TTS[J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2019, 27(12): 2067-2079.
APA | Zheng, Yibin, Tao, Jianhua, Wen, Zhengqi, & Yi, Jiangyan. (2019). Forward-Backward Decoding Sequence for Regularizing End-to-End TTS. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 27(12), 2067-2079.
MLA | Zheng, Yibin, et al. "Forward-Backward Decoding Sequence for Regularizing End-to-End TTS." IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING 27.12 (2019): 2067-2079.
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.