Recurrent neural network transducers (RNN-T) have been successfully applied to end-to-end speech recognition. However, the recurrent structure makes parallelization difficult. In this paper, we propose a self-attention transducer (SA-T) for speech recognition, in which the RNNs are replaced with self-attention blocks; these blocks are powerful at modeling long-term dependencies within sequences and can be parallelized efficiently. Furthermore, a path-aware regularization is proposed to help the SA-T learn alignments and improve performance. Additionally, a chunk-flow mechanism is utilized to enable online decoding. All experiments are conducted on AISHELL-1, a Mandarin Chinese dataset. The results demonstrate that our proposed approach achieves a 21.3% relative reduction in character error rate compared with the baseline RNN-T. In addition, the SA-T with the chunk-flow mechanism can perform online decoding with only a slight degradation in performance.
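For reference, a transducer models P(y|x) = Σ_{a ∈ B⁻¹(y)} P(a|x), marginalizing over all blank-augmented alignments a that collapse to the label sequence y; the SA-T keeps this objective and replaces the recurrent networks with self-attention blocks.

Below is a minimal PyTorch sketch of such a self-attention encoder block combined with a chunk-based attention mask, illustrating how a chunk-flow style restriction could bound each frame's context for online decoding. This is not the authors' implementation: all names, window sizes, and hyperparameters here are illustrative assumptions.

# Minimal sketch (hypothetical names; window sizes are assumptions, not the
# paper's configuration): a self-attention encoder block with a chunk-based
# mask that limits each frame's attention context for streaming decoding.
import torch
import torch.nn as nn

def chunk_flow_mask(seq_len, chunk_size, left_chunks=2):
    """Boolean mask where True means "allowed to attend": frame i may attend
    to frame j only if j lies in the same chunk as i or in one of the
    `left_chunks` preceding chunks."""
    idx = torch.arange(seq_len)
    chunk_id = idx // chunk_size                           # chunk index of each frame
    diff = chunk_id.unsqueeze(0) - chunk_id.unsqueeze(1)   # diff[i, j] = chunk(j) - chunk(i)
    return (diff <= 0) & (diff >= -left_chunks)

class SelfAttentionBlock(nn.Module):
    """One encoder block: multi-head self-attention plus a feed-forward
    network, each wrapped with a residual connection and layer norm."""
    def __init__(self, d_model=256, n_heads=4, d_ff=1024, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, attn_mask=None):
        # nn.MultiheadAttention interprets True in attn_mask as *blocked*,
        # so invert the "allowed" mask built above.
        blocked = None if attn_mask is None else ~attn_mask
        a, _ = self.attn(x, x, x, attn_mask=blocked, need_weights=False)
        x = self.norm1(x + self.drop(a))
        return self.norm2(x + self.drop(self.ff(x)))

if __name__ == "__main__":
    frames = torch.randn(1, 32, 256)                   # (batch, time, features)
    mask = chunk_flow_mask(seq_len=32, chunk_size=8)   # bounded context per frame
    out = SelfAttentionBlock()(frames, attn_mask=mask)
    print(out.shape)                                   # torch.Size([1, 32, 256])

In this sketch each frame attends only to its own chunk and a fixed number of preceding chunks, so the attention context (and hence latency) stays bounded regardless of utterance length.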
1. School of Artificial Intelligence, University of Chinese Academy of Sciences; 2. National Laboratory of Pattern Recognition, Institute of Automation, CASIA; 3. CAS Center for Excellence in Brain Science and Intelligence Technology
Recommended citation (GB/T 7714):
Zhengkun Tian, Jiangyan Yi, Jianhua Tao, et al. Self-Attention Transducers for End-to-End Speech Recognition[C]. In: Proc. Interspeech 2019, Graz, Austria, September 15–19, 2019.