Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network
Dong, Meng1,2; Huang, Xuhui2; Xu, Bo2,3,4
Journal: PLOS ONE
2018-11-29
Volume: 13; Issue: 11; Pages: 19
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0204596
Corresponding authors: Huang, Xuhui (xuhui.huang@ia.ac.cn); Xu, Bo (xubo@ia.ac.cn)
Abstract: Speech recognition (SR) has been improved significantly by artificial neural networks (ANNs), but ANNs suffer from biological implausibility and excessive power consumption because of the nonlocal transfer of real-valued errors and weights. Spiking neural networks (SNNs) have the potential to overcome these drawbacks thanks to their efficient spike communication and their natural fit with the synaptic plasticity rules found in the brain for weight modification. However, existing SNN models for SR either performed poorly or were trained in biologically implausible ways. In this paper, we present a biologically inspired convolutional SNN model for SR. The network adopts a time-to-first-spike coding scheme for fast and efficient information processing. A biological learning rule, spike-timing-dependent plasticity (STDP), adjusts the synaptic weights of the convolutional neurons to form receptive fields in an unsupervised way. In the convolutional structure, a strategy of local weight sharing is introduced, which can extract better speech features than global weight sharing. We first evaluated the SNN model with a linear support vector machine (SVM) on the TIDIGITS dataset, where it achieved an accuracy of 97.5%, comparable to the best ANN results. In-depth analysis of the network outputs showed that they are not only more linearly separable but also lower-dimensional and sparse. To further confirm the validity of our model, we trained it on a more difficult recognition task based on the TIMIT dataset, where it achieved a high accuracy of 93.8%. Moreover, a linear spike-based classifier, the tempotron, achieves accuracies very close to those of the SVM on both tasks. These results demonstrate that an STDP-based convolutional SNN model equipped with local weight sharing and temporal coding can solve the SR task accurately and efficiently.
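For readers unfamiliar with STDP under time-to-first-spike coding, the following is a minimal illustrative sketch, not the authors' actual implementation: each neuron fires at most once, and a pair-based exponential STDP rule potentiates synapses whose presynaptic spike precedes the postsynaptic spike and depresses the rest. All parameter values (`A_PLUS`, `A_MINUS`, `TAU`) are hypothetical.

```python
import numpy as np

# Hypothetical parameters; the paper's actual values and rule details differ.
A_PLUS, A_MINUS = 0.01, 0.012   # learning rates for potentiation / depression
TAU = 20.0                      # STDP time constant (ms)

def stdp_update(w, t_pre, t_post):
    """One pair-based STDP step for time-to-first-spike coding.

    w      : (n_post, n_pre) synaptic weight matrix
    t_pre  : (n_pre,)  first-spike times of presynaptic neurons (ms)
    t_post : (n_post,) first-spike times of postsynaptic neurons (ms)
    """
    # dt[i, j] = t_post[i] - t_pre[j]: positive when pre fires before post
    dt = t_post[:, None] - t_pre[None, :]
    ltp = A_PLUS * np.exp(-dt / TAU) * (dt > 0)    # causal pairs -> potentiate
    ltd = -A_MINUS * np.exp(dt / TAU) * (dt <= 0)  # anti-causal pairs -> depress
    # Clip weights to [0, 1], a common stabilizer for unsupervised STDP
    return np.clip(w + ltp + ltd, 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.uniform(0.3, 0.7, size=(4, 8))   # 4 postsynaptic, 8 presynaptic neurons
t_pre = rng.uniform(0, 30, size=8)
t_post = rng.uniform(0, 30, size=4)
w_new = stdp_update(w, t_pre, t_post)
```

In a convolutional SNN with local weight sharing, an update like this would be applied per local group of convolutional neurons rather than to one global kernel.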
Funding projects: Natural Science Foundation of China [11505283]; Strategic Priority Research Program of the Chinese Academy of Sciences [XDBS01070000]; Independent Deployment Project of CAS Center for Excellence in Brain Science and Intelligent Technology [CEBSIT2017-02]; NVIDIA Corporation
WOS keywords: STIMULUS LOCATION; WORD RECOGNITION; INFORMATION; REPRESENTATIONS; FEATURES; NEURONS; TIME; CODE
WOS research area: Science & Technology - Other Topics
Language: English
Publisher: PUBLIC LIBRARY SCIENCE
WOS accession number: WOS:000451763800010
Funding agencies: Natural Science Foundation of China; Strategic Priority Research Program of the Chinese Academy of Sciences; Independent Deployment Project of CAS Center for Excellence in Brain Science and Intelligent Technology; NVIDIA Corporation
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/25705
Collection: Institute of Automation, Chinese Academy of Sciences
Author affiliations:
1. Harbin Univ Sci & Technol, Sch Automat, Harbin, Heilongjiang, Peoples R China
2.Chinese Acad Sci, Res Ctr Brain Inspired Intelligence, Inst Automat, Beijing, Peoples R China
3.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
4.Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing, Peoples R China
Recommended citation:
GB/T 7714
Dong, Meng, Huang, Xuhui, Xu, Bo. Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network[J]. PLOS ONE, 2018, 13(11): 19.
APA: Dong, Meng, Huang, Xuhui, & Xu, Bo. (2018). Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network. PLOS ONE, 13(11), 19.
MLA: Dong, Meng, et al. "Unsupervised speech recognition through spike-timing-dependent plasticity in a convolutional spiking neural network". PLOS ONE 13.11 (2018): 19.