A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Authors: Jianguo Chen; Kenli Li; Kashif Bilal; Xu Zhou; Keqin Li; Philip S. Yu
Journal: IEEE Transactions on Parallel and Distributed Systems
Year: 2019
Volume/Issue: Vol. 30, No. 5; Pages: 965-976
Keywords: Training; Computer architecture; Computational modeling; Parallel processing; Task analysis; Distributed computing; Acceleration; Big data; Bi-layered parallel computing; Convolutional neural networks; Deep learning
ISSN: 1045-9219; 1558-2183
Content type: Journal article
URI: http://www.corc.org.cn/handle/1471x/4610112
Collection: Hunan University
Author affiliations:
1. College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
2.COMSATS University Islamabad, Abbottabad, Pakistan
3.Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
Recommended citation formats:
GB/T 7714
Jianguo Chen, Kenli Li, Kashif Bilal, et al. A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks[J]. IEEE Transactions on Parallel and Distributed Systems, 2019, 30(5): 965-976.
APA Jianguo Chen, Kenli Li, Kashif Bilal, Xu Zhou, Keqin Li, & Philip S. Yu. (2019). A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks. IEEE Transactions on Parallel and Distributed Systems, 30(5), 965-976.
MLA Jianguo Chen, et al. "A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks". IEEE Transactions on Parallel and Distributed Systems 30.5 (2019): 965-976.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.


© 2017 CSpace - Powered by CSpace