A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Jianguo Chen; Kenli Li; Kashif Bilal; Xu Zhou; Keqin Li; Philip S. Yu
Journal | IEEE Transactions on Parallel and Distributed Systems
Year | 2019
Volume/Issue | Vol. 30, No. 5
Pages | 965-976
Keywords | Training; Computer architecture; Computational modeling; Parallel processing; Task analysis; Distributed computing; Acceleration; Big data; bi-layered parallel computing; convolutional neural networks; deep learning; distributed computing
ISSN | 1045-9219; 1558-2183
Content Type | Journal Article
URI | http://www.corc.org.cn/handle/1471x/4610112
Collection | Hunan University
Author Affiliations | 1. College of Computer Science and Electronic Engineering, Hunan University, Changsha, China; 2. COMSATS University Islamabad, Abbottabad, Pakistan; 3. Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
Recommended Citation (GB/T 7714) | Jianguo Chen, Kenli Li, Kashif Bilal, et al. A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks[J]. IEEE Transactions on Parallel and Distributed Systems, 2019, Vol. 30, No. 5: 965-976.
APA | Jianguo Chen, Kenli Li, Kashif Bilal, Xu Zhou, Keqin Li, & Philip S. Yu. (2019). A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks. IEEE Transactions on Parallel and Distributed Systems, Vol. 30, No. 5, 965-976.
MLA | Jianguo Chen, et al. "A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks". IEEE Transactions on Parallel and Distributed Systems Vol. 30, No. 5 (2019): 965-976.
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.