Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks
Wu, Jingya1,2; Lu, Wenyan1,2; Yan, Guihai1,2; Li, Jiajun1,2; Gong, Shijun1,2; Jiang, Shuhao1,2; Li, Xiaowei1,2
Journal: IEEE TRANSACTIONS ON COMPUTERS
Publication Date: 2019-06-01
Volume: 68, Issue: 6, Pages: 867-881
Keywords: Convolutional neural networks; accelerator architecture; parallelism; sparsity
ISSN: 0018-9340
DOI: 10.1109/TC.2018.2890258
Abstract: There are two approaches to improving the performance of Convolutional Neural Networks (CNNs): 1) accelerating computation and 2) reducing the amount of computation. The acceleration approaches take advantage of the computing regularity of CNNs, which enables abundant fine-grained parallelism across feature maps, neurons, and synapses. Alternatively, reducing computation leverages the intrinsic sparsity of CNN neurons and synapses. The sparsity manifests as computing "bubbles", i.e., zero- or tiny-valued neurons and synapses. These bubbles can be removed to reduce the volume of computation. Although distinctly different in principle, we find that the two types of approaches are not orthogonal to each other. Even worse, they may conflict with each other when working together. The conditional branches introduced into the original computations by some bubble-removing mechanisms destroy the regularity of deeply nested loops, thereby impairing the intrinsic parallelism. Therefore, enabling synergy between the two types of approaches is critical to achieving superior performance. This paper proposes a relaxed synchronous computing architecture, FlexFlow-Pro, to fulfill this purpose. Compared with state-of-the-art accelerators, FlexFlow-Pro gains more than 2.5x performance on average and 2x energy efficiency.
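The conflict the abstract describes can be illustrated with a minimal 1-D convolution sketch: a dense loop is branch-free and regular, while a bubble-skipping variant saves multiply-accumulates at the cost of irregular control flow. This is illustrative only, not the paper's FlexFlow-Pro architecture; the function names are hypothetical.

```python
import numpy as np

def conv1d_dense(x, w):
    """Regular (dense) 1-D convolution: every multiply-accumulate
    executes, so the inner loop is branch-free and easy to parallelize."""
    out = np.zeros(len(x) - len(w) + 1)
    for i in range(len(out)):
        for j in range(len(w)):
            out[i] += x[i + j] * w[j]
    return out

def conv1d_skip_zeros(x, w):
    """Sparsity-exploiting variant: zero weights (the "bubbles") are
    pruned, so fewer MACs run -- but the loop now iterates over an
    irregular index set rather than a fixed, predictable trip count."""
    out = np.zeros(len(x) - len(w) + 1)
    nonzero = [(j, wj) for j, wj in enumerate(w) if wj != 0.0]
    for i in range(len(out)):
        for j, wj in nonzero:  # irregular: depends on the data, not the shape
            out[i] += x[i + j] * wj
    return out
```

Both functions produce identical outputs; the second simply does less work when `w` is sparse, at the price of the regularity that hardware parallelism relies on.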
Funding: National Natural Science Foundation of China [61572470]; National Natural Science Foundation of China [61532017]; National Natural Science Foundation of China [61522406]; National Natural Science Foundation of China [61872336]; National Natural Science Foundation of China [61432017]; National Natural Science Foundation of China [61376043]; National Natural Science Foundation of China [61521092]; Youth Innovation Promotion Association, CAS [404441000]
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE COMPUTER SOC
WOS Record: WOS:000467523100005
Document Type: Journal article
Source URL: [http://119.78.100.204/handle/2XEOYT63/4255]
Collection: Institute of Computing Technology, Chinese Academy of Sciences, Journal Papers (English)
Corresponding Authors: Yan, Guihai; Li, Xiaowei
Affiliations:
1.Univ Chinese Acad Sci, Beijing 100190, Peoples R China
2.Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China
Recommended Citation:
GB/T 7714
Wu, Jingya, Lu, Wenyan, Yan, Guihai, et al. Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks[J]. IEEE TRANSACTIONS ON COMPUTERS, 2019, 68(6): 867-881.
APA: Wu, Jingya, Lu, Wenyan, Yan, Guihai, Li, Jiajun, Gong, Shijun, ... & Li, Xiaowei. (2019). Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks. IEEE TRANSACTIONS ON COMPUTERS, 68(6), 867-881.
MLA: Wu, Jingya, et al. "Promoting the Harmony between Sparsity and Regularity: A Relaxed Synchronous Architecture for Convolutional Neural Networks". IEEE TRANSACTIONS ON COMPUTERS 68.6 (2019): 867-881.