NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition
Sheng, Fenfen1,2; Chen, Zhineng1; Xu, Bo1
2019-09
Conference dates: 2019-9-20 ~ 2019-9-25
Conference location: Sydney, Australia
Abstract

Scene text recognition has attracted considerable research attention due to its importance to various applications. Existing methods mainly adopt recurrence- or convolution-based networks. Although they have achieved good performance, these methods still suffer from two limitations: slow training speed due to the internal recurrence of RNNs, and high complexity due to the stacked convolutional layers needed for long-term feature extraction. This paper, for the first time, proposes a no-recurrence sequence-to-sequence text recognizer, named NRTR, that dispenses with recurrences and convolutions entirely. NRTR follows the encoder-decoder paradigm, where the encoder uses stacked self-attention to extract image features, and the decoder applies stacked self-attention to recognize text based on the encoder output. Because NRTR relies solely on the self-attention mechanism, it can be trained with more parallelization and less complexity. Considering that scene images exhibit large variations in text and background, we further design a modality-transform block that effectively transforms 2D input images into 1D sequences, which is combined with the encoder to extract more discriminative features. NRTR achieves state-of-the-art or highly competitive performance on both regular and irregular benchmarks, while requiring only a small fraction of the training time of the best model in the literature (at least 8 times faster).
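The sketch below illustrates the overall structure the abstract describes: a modality-transform block that maps a 2D image to a 1D feature sequence, followed by a self-attention encoder-decoder. It is a minimal PyTorch illustration; the layer counts, dimensions, convolutional design, and the omission of positional encodings are assumptions for brevity, not the paper's exact configuration.

```python
# Minimal NRTR-style sketch: conv-based modality transform + Transformer
# encoder-decoder. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class ModalityTransform(nn.Module):
    """Maps a 2D image (B, 1, 32, W) to a 1D feature sequence (B, W/4, d_model)."""
    def __init__(self, d_model=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # -> H/2, W/2
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> H/4, W/4
        )
        # Assumes 32-pixel-high inputs, so feature height after the convs is 8.
        self.proj = nn.Linear(64 * 8, d_model)

    def forward(self, x):
        f = self.conv(x)                                  # (B, 64, 8, W/4)
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)    # width becomes the sequence axis
        return self.proj(f)                               # (B, W/4, d_model)


class NRTRSketch(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8,
                 num_encoder_layers=6, num_decoder_layers=6):
        super().__init__()
        self.modality = ModalityTransform(d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        # Positional encodings are omitted here for brevity; a real model adds them.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_encoder_layers,
            num_decoder_layers=num_decoder_layers,
            batch_first=True)
        self.classifier = nn.Linear(d_model, vocab_size)

    def forward(self, images, tgt_tokens):
        src = self.modality(images)                       # image -> 1D feature sequence
        tgt = self.embed(tgt_tokens)                      # shifted target characters
        mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=mask)
        return self.classifier(out)                       # (B, T, vocab_size)


if __name__ == "__main__":
    model = NRTRSketch(vocab_size=40)
    images = torch.randn(2, 1, 32, 100)                   # grayscale 32xW text crops
    tokens = torch.randint(0, 40, (2, 10))                # target character ids
    print(model(images, tokens).shape)                    # torch.Size([2, 10, 40])
```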

Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/39261
Collection: Research Center for Digital Content Technology and Services - Auditory Models and Cognitive Computing
Author affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Sheng, Fenfen, Chen, Zhineng, Xu, Bo. NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition[C]. Sydney, Australia, 2019-9-20 ~ 2019-9-25.