Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription
Su, Rongfeng; Liu, Xunying; Wang, Lan
Conference date: 2018
Conference location: Hyderabad, India
Abstract: Visual information can be incorporated into automatic speech recognition (ASR) systems to improve their robustness in adverse acoustic conditions. Conventional audio-visual speech recognition (AVSR) systems require highly specialized audio-visual (AV) data in both system training and evaluation. For many real-world speech recognition applications, only audio information is available. This presents a major challenge to a wider application of AVSR systems. To address this challenge, this paper proposes a semi-supervised visual feature learning approach for developing AVSR systems on a DARPA GALE Mandarin broadcast transcription task. Audio-to-visual feature inversion long short-term memory neural networks (LSTMs) were initially constructed using limited amounts of out-of-domain AV data. The domain mismatch between these acoustic features and the broadcast data was further reduced using multi-level domain adaptive deep networks. Visual features were then automatically generated from the broadcast speech audio and used at both AVSR system training and testing time. Experimental results suggest that a CNN-based AVSR system using the proposed semi-supervised cross-domain audio-to-visual feature generation technique outperformed the baseline audio-only CNN ASR system by an average relative CER reduction of 6.8%. In particular, on the most difficult Phoenix TV subset, a CER reduction of 1.32% absolute (8.34% relative) was obtained.
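
The central component described in the abstract is an audio-to-visual feature inversion LSTM: a recurrent network trained on parallel AV data to regress visual features from acoustic ones, so that visual features can later be generated for audio-only broadcast speech. The following is a minimal sketch of such a model, assuming a PyTorch implementation; the feature dimensions, layer sizes, MSE regression objective, and all variable names are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumptions, not the authors' code): an audio-to-visual
# feature inversion LSTM in the spirit of the approach described above.
import torch
import torch.nn as nn

class AudioToVisualLSTM(nn.Module):
    """Maps a sequence of acoustic features to visual features."""
    def __init__(self, audio_dim=40, visual_dim=30, hidden_dim=256, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(audio_dim, hidden_dim,
                            num_layers=num_layers, batch_first=True)
        self.proj = nn.Linear(hidden_dim, visual_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, audio_dim)
        hidden, _ = self.lstm(audio_feats)
        return self.proj(hidden)  # (batch, time, visual_dim)

# Training on limited out-of-domain parallel AV data
# (random tensors stand in for real features here).
model = AudioToVisualLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

audio = torch.randn(8, 100, 40)   # acoustic features
visual = torch.randn(8, 100, 30)  # time-aligned visual features

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(audio), visual)
    loss.backward()
    optimizer.step()

# At AVSR training/testing time, visual features are generated from the
# broadcast audio alone and combined with the acoustic features.
with torch.no_grad():
    broadcast_audio = torch.randn(1, 200, 40)
    generated_visual = model(broadcast_audio)
av_feats = torch.cat([broadcast_audio, generated_visual], dim=-1)

In the full system described by the abstract, this inversion network would be preceded by multi-level domain adaptive deep networks that reduce the acoustic mismatch between the out-of-domain AV training data and the broadcast data; that adaptation stage is omitted from this sketch.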
Content type: Conference paper
Source URL: http://ir.siat.ac.cn:8080/handle/172644/13712
Collection: Shenzhen Institutes of Advanced Technology, Institute of Integration Technology
Recommended citation (GB/T 7714):
Su, Rongfeng, Liu, Xunying, Wang, Lan. Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription[C]. Hyderabad, India, 2018.