Title | Low Latency Spiking ConvNets with Restricted Output Training and False Spike Inhibition |
Authors | Chen RZ (陈睿智); Ma H (马鸿); Guo P (郭鹏); Xie SL (谢少林); Wang DL (王东琳) |
Date | 2018-10 |
Conference date | 2018-07 |
Conference location | Rio de Janeiro, Brazil |
Abstract | Deep convolutional neural networks (ConvNets) have achieved state-of-the-art performance on many real-world applications. However, ConvNets demand significant computation and storage. Spiking neural networks (SNNs), with sparsely activated neurons and event-driven computation, show great potential to exploit ultra-low-power spike-based hardware architectures. Yet training SNNs to accuracy comparable to ConvNets is difficult. Recent work has demonstrated converting trained ConvNets to SNNs (CNN-SNN conversion) with similar accuracy; however, the energy efficiency of the converted SNNs is impaired by their increased classification latency. In this paper, we focus on optimizing the classification latency of converted SNNs. First, we propose a restricted output training method that normalizes the converted weights dynamically during the CNN-SNN training phase. Second, we identify false spikes and derive a false spike inhibition theory to speed up the convergence of the classification process. Third, we propose a temporal max pooling method that approximates the max pooling operation in ConvNets without accuracy loss. Evaluation shows that the converted SNNs converge in about 30 time-steps and achieve a best classification accuracy of 94% on the CIFAR-10 dataset. |
Language | English |
Content type | Conference paper |
Source URL | [http://ir.ia.ac.cn/handle/173211/23614] |
Collection | Institute of Automation, National Engineering Research Center for ASIC Design |
Author affiliations | 1. Institute of Automation, Chinese Academy of Sciences; 2. University of Chinese Academy of Sciences |
Recommended citation (GB/T 7714) | Chen RZ, Ma H, Guo P, et al. Low Latency Spiking ConvNets with Restricted Output Training and False Spike Inhibition[C]. Rio de Janeiro, Brazil, 2018-07. |
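The CNN-SNN conversion the abstract builds on maps a trained ConvNet's ReLU activations onto the firing rates of integrate-and-fire (IF) neurons, so that spike counts over a window of time-steps approximate the original analog activations. Below is a minimal sketch of that generic idea only; it does not implement the paper's restricted output training, false spike inhibition, or temporal max pooling, and the function name, threshold `v_th`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def if_layer_rates(x, weights, T=30, v_th=1.0):
    """Simulate one integrate-and-fire layer for T time-steps.

    Each neuron integrates a constant input current (x @ weights) per
    step, emits a spike when its membrane potential crosses v_th, and
    resets by subtracting the threshold (which preserves the residual
    charge). The returned firing rate approximates the ReLU activation
    clipped to [0, 1], with error shrinking as T grows.
    """
    a = x @ weights               # constant input current per time-step
    v = np.zeros_like(a)          # membrane potentials
    spikes = np.zeros_like(a)     # accumulated spike counts
    for _ in range(T):
        v += a
        fired = v >= v_th         # neurons crossing threshold spike
        spikes += fired
        v[fired] -= v_th          # reset by subtraction
    return spikes / T

rng = np.random.default_rng(0)
x = rng.random((4, 8))                       # toy batch of inputs
w = rng.normal(scale=0.2, size=(8, 5))       # toy converted weights
rates = if_layer_rates(x, w, T=30)
relu = np.clip(x @ w, 0.0, 1.0)              # analog reference activation
print(np.max(np.abs(rates - relu)))          # gap bounded by roughly 1/T
```

The roughly-1/T rate-approximation error is what makes latency the central trade-off in converted SNNs: fewer time-steps mean coarser activations, which is why the paper's techniques target fast convergence (about 30 time-steps) rather than simply truncating the simulation window.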