Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM
Cong Leng; Zesheng Dou; Hao Li; Shenghuo Zhu; Rong Jin
2018-02
Conference Date | February 2-8, 2018 |
Conference Venue | New Orleans, USA |
Keywords | ADMM; Low-bit |
Abstract | Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit deployment in scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural networks. We model this problem as a discretely constrained optimization problem. Borrowing the idea from the Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of the network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms, which lead to considerably faster convergence compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when it comes to extremely low bit neural networks. |
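To make the decomposition described in the abstract concrete, below is a minimal NumPy sketch of an ADMM-style quantization loop. Everything here is an illustrative assumption rather than the paper's actual procedure: the function names (project_ternary, admm_quantize, loss_grad) are hypothetical, the constraint set is a scaled ternary one {-alpha, 0, +alpha} as one possible "extremely low bit" choice, and the w-update uses a plain gradient step instead of the extragradient method the authors propose.

import numpy as np

def project_ternary(v, iters=10):
    # Project v onto {-alpha, 0, +alpha}^n by alternating between the best
    # scale alpha for a fixed support and the best support for a fixed alpha
    # (iterative quantization in spirit; details differ from the paper).
    q = np.sign(v)
    alpha = 0.0
    for _ in range(iters):
        nz = q != 0
        alpha = np.abs(v[nz]).mean() if nz.any() else 0.0
        q = np.where(np.abs(v) > alpha / 2.0, np.sign(v), 0.0)
    return alpha * q

def admm_quantize(w, loss_grad, rho=1e-2, lr=1e-2, steps=200):
    # ADMM loop: w carries the continuous parameters, g is the quantized
    # copy, and lam is the (scaled) dual variable enforcing w == g.
    g = project_ternary(w)
    lam = np.zeros_like(w)
    for _ in range(steps):
        # w-update: gradient step on loss(w) + (rho/2) * ||w - g + lam||^2
        w = w - lr * (loss_grad(w) + rho * (w - g + lam))
        # g-update: projection of w + lam onto the discrete constraint set
        g = project_ternary(w + lam)
        # dual update accumulates the remaining constraint violation
        lam = lam + w - g
    return g

# Toy usage: ternarize weights while staying close to a dense target vector.
rng = np.random.default_rng(0)
target = rng.standard_normal(256)
g = admm_quantize(target.copy(), loss_grad=lambda w: w - target)
print(np.unique(np.round(g, 4)))  # three levels: -alpha, 0.0, +alpha

The point the sketch illustrates is the decoupling the abstract refers to: w stays continuous throughout training, and only the auxiliary copy g is forced onto the discrete set by the projection step.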
Content Type | Conference Paper |
Source URL | [http://ir.ia.ac.cn/handle/173211/26145] |
Collection | Institute of Automation, Personal Space |
Corresponding Author | Cong Leng |
Affiliation | Alibaba Group |
Recommended Citation (GB/T 7714) | Cong Leng, Zesheng Dou, Hao Li, et al. Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM[C]. In: New Orleans, USA, February 2-8, 2018. |