Knowledge Transfer from Pre-Trained Language Models to CIF-Based Speech Recognizers via Hierarchical Distillation
Minglun Han (2,3); Feilong Chen (1,3); Jing Shi (3); Shuang Xu (3); Bo Xu (1,2,3)
2023-05
Conference date: 2023-8-20
Conference location: Dublin, Ireland
Abstract (English)

Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works may be limited by the inflexible structures of PLMs and by their insufficient utilization. To alleviate these problems, we propose hierarchical knowledge distillation (HKD) on continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs cross-modal knowledge distillation with a contrastive loss at the acoustic level and knowledge distillation with a regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively.
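The abstract describes two distillation levels: a cross-modal contrastive loss at the acoustic level and a regression loss at the linguistic level. Below is a minimal, illustrative PyTorch sketch of what such losses could look like; it is not the authors' released implementation. It assumes the CIF module yields token-synchronous acoustic embeddings and the ASR decoder yields linguistic hidden states, both aligned one-to-one with PLM token embeddings; the dimensions, temperature, and equal loss weighting are assumptions for illustration only.

```python
# Sketch of HKD-style losses (assumed formulation, not the paper's code).
import torch
import torch.nn.functional as F


def acoustic_contrastive_loss(cif_embeds, plm_embeds, temperature=0.1):
    """Cross-modal contrastive distillation at the acoustic level.

    cif_embeds: (T, D) token-level acoustic embeddings from the CIF module (student)
    plm_embeds: (T, D) PLM token embeddings for the same tokens (teacher)
    Each acoustic embedding is pulled toward its matching PLM embedding and
    pushed away from the PLM embeddings of the other tokens (InfoNCE-style).
    """
    a = F.normalize(cif_embeds, dim=-1)
    t = F.normalize(plm_embeds, dim=-1)
    logits = a @ t.T / temperature            # (T, T) similarity matrix
    targets = torch.arange(a.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


def linguistic_regression_loss(dec_states, plm_states):
    """Feature-matching (regression) distillation at the linguistic level.

    dec_states: (T, D) decoder hidden states of the ASR model (student)
    plm_states: (T, D) PLM hidden states for the same token sequence (teacher)
    """
    return F.mse_loss(dec_states, plm_states)


# Toy usage with random tensors standing in for real model outputs.
T, D = 8, 256
cif_embeds = torch.randn(T, D, requires_grad=True)
dec_states = torch.randn(T, D, requires_grad=True)
plm_embeds = torch.randn(T, D)                # teacher outputs, no gradient
plm_states = torch.randn(T, D)

# Equal weighting here is illustrative; the paper's weighting may differ.
loss = (acoustic_contrastive_loss(cif_embeds, plm_embeds)
        + linguistic_regression_loss(dec_states, plm_states))
loss.backward()
```

The two terms would typically be added to the usual ASR training objective, with the PLM kept frozen as the teacher.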

Language: English
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/52064
Collection: Digital Content Technology and Service Research Center, Auditory Models and Cognitive Computing
Corresponding author: Jing Shi
Affiliations: 1. School of Future Technology, University of Chinese Academy of Sciences
2.School of Artificial Intelligence, University of Chinese Academy of Sciences
3.Institute of Automation, Chinese Academy of Sciences
Recommended citation
GB/T 7714
Minglun Han, Feilong Chen, Jing Shi, et al. Knowledge Transfer from Pre-Trained Language Models to CIF-Based Speech Recognizers via Hierarchical Distillation[C]. Dublin, Ireland, 2023-8-20.