Self-Attention Transducers for End-to-End Speech Recognition
Zhengkun Tian (1,2); Jiangyan Yi (2); Jianhua Tao (1,2,3); Ye Bai (1,2); Zhengqi Wen (2)
2019-09
Conference Date: September 15–19, 2019
Conference Venue: Graz, Austria
Abstract

Recurrent neural network transducers (RNN-T) have been successfully applied to end-to-end speech recognition. However, the recurrent structure makes parallelization difficult. In this paper, we propose a self-attention transducer (SA-T) for speech recognition, in which the RNNs are replaced with self-attention blocks; these model long-term dependencies within sequences effectively and can be parallelized efficiently. Furthermore, a path-aware regularization is proposed to help the SA-T learn alignments and improve performance. Additionally, a chunk-flow mechanism is used to enable online decoding. All experiments are conducted on AISHELL-1, a Mandarin Chinese dataset. The results demonstrate that our proposed approach achieves a 21.3% relative reduction in character error rate compared with the baseline RNN-T. In addition, the SA-T with the chunk-flow mechanism can perform online decoding with only slight performance degradation.
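The abstract's chunk-flow idea, restricting self-attention to local chunks so decoding can proceed in a streaming fashion, can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced in this record, so the function names, the `chunk_size` and `left_chunks` hyperparameters, and the masked attention below are assumptions for illustration only:

```python
import numpy as np

def chunk_attention_mask(seq_len, chunk_size, left_chunks=1):
    """Boolean mask: position i may attend to position j.

    Illustrative chunk-wise mask (hypothetical parameters, not from
    the paper): each frame attends to frames in its own chunk and a
    limited number of preceding chunks, bounding the context needed
    for online decoding.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        cur = i // chunk_size
        start = max(0, (cur - left_chunks) * chunk_size)
        end = min(seq_len, (cur + 1) * chunk_size)
        mask[i, start:end] = True
    return mask

def masked_self_attention(q, k, v, mask):
    """Scaled dot-product self-attention with out-of-chunk positions blocked."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)  # suppress masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over allowed frames
    return weights @ v
```

With `chunk_size=4` and `left_chunks=1`, a frame in the second chunk sees at most eight past frames, which is the kind of bounded-context behavior that makes streaming inference possible at some cost in modeling range.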

Language: English
Content Type: Conference Paper
Source URL: [http://ir.ia.ac.cn/handle/173211/48608]
Collection: National Laboratory of Pattern Recognition_Intelligent Interaction
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. National Laboratory of Pattern Recognition, Institute of Automation, CASIA
3. CAS Center for Excellence in Brain Science and Intelligence Technology
Recommended Citation (GB/T 7714):
Zhengkun Tian, Jiangyan Yi, Jianhua Tao, et al. Self-Attention Transducers for End-to-End Speech Recognition[C]. Graz, Austria, September 15–19, 2019.