Sparsity-Inducing Binarized Neural Networks
Wang, Peisong1,2; He, Xiangyu1,2; Li, Gang1,2; Zhao, Tianli1,2; Cheng, Jian1,2
2020
Conference Date: 2020
Conference Venue: New York
Abstract

Binarization of feature representations is critical for Binarized Neural Networks (BNNs). Currently, the sign function is the most commonly used method for feature binarization. Although it works well on small datasets, its performance on ImageNet remains unsatisfactory. Previous methods mainly focus on minimizing quantization error, improving training strategies, and decomposing each convolutional layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-BNN), which quantizes activations to be either 0 or +1, introducing sparsity into the binary representation. We further introduce trainable thresholds into the backward function of binarization to guide gradient propagation. Our method dramatically outperforms the current state of the art, lowering the performance gap between full-precision networks and BNNs on mainstream architectures and achieving new state-of-the-art results for binarized AlexNet (Top-1 50.5%), ResNet-18 (Top-1 59.7%), and VGG-Net (Top-1 63.2%). At inference time, Si-BNN still enjoys the high efficiency of exclusive-not-or (XNOR) operations.
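As a concrete illustration of the idea described in the abstract, below is a minimal PyTorch-style sketch of a {0, +1} activation binarizer with a trainable threshold in the backward pass. The abstract does not specify Si-BNN's exact forward/backward rules, so the `SparseBinarize` function, the straight-through gradient window, and the sign of the threshold gradient are all illustrative assumptions, not the paper's formulation.

```python
import torch

class SparseBinarize(torch.autograd.Function):
    """Hypothetical sketch: binarize activations to {0, +1} with a
    trainable threshold and a straight-through-style backward pass.
    Not the paper's exact formulation."""

    @staticmethod
    def forward(ctx, x, threshold):
        ctx.save_for_backward(x, threshold)
        # Activations above the threshold map to +1, the rest to 0,
        # which makes the binary representation sparse.
        return (x > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        x, threshold = ctx.saved_tensors
        # Assumed straight-through rule: let gradients pass only inside
        # a window around the threshold; the width 1.0 is an assumed
        # hyperparameter, not a value from the paper.
        mask = ((x - threshold).abs() < 1.0).float()
        grad_x = grad_output * mask
        # Gradient w.r.t. the trainable threshold: raising the threshold
        # pushes activations toward 0, hence the negative sign.
        grad_t = -(grad_output * mask).sum().reshape(threshold.shape)
        return grad_x, grad_t

# Usage: the threshold is a learnable parameter updated by the optimizer.
x = torch.randn(4, 8, requires_grad=True)
t = torch.zeros(1, requires_grad=True)
y = SparseBinarize.apply(x, t)   # values in {0.0, 1.0}
y.sum().backward()               # gradients reach both x and t
```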

Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/40621
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Group
Corresponding Author: Cheng, Jian
Author Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Wang, Peisong, He, Xiangyu, Li, Gang, et al. Sparsity-Inducing Binarized Neural Networks[C]. New York, 2020.