Incremental PCANet: A Lifelong Learning Framework to Achieve the Plasticity of both Feature and Classifier Constructions
Wang-Li Hao; Zhaoxiang Zhang
2016-11-28
Conference dates: 28-30 November 2016
Conference location: Beijing, China
Keywords: Plasticity; Lifelong Learning; Incremental PCANet; Incremental SVM
Abstract: The plasticity of our brain gives us a promising ability to learn about and understand the world. Although great successes have been achieved in many fields, few bio-inspired methods have mimicked this ability. They are infeasible when the data are time-varying and large-scale, because they require all training data to be loaded into memory. Furthermore, even the popular deep convolutional neural network (CNN) models have relatively fixed structures. Through incremental PCANet, this paper explores a lifelong learning framework that achieves plasticity in both feature and classifier construction. The proposed model comprises three parts: Gabor filters followed by a max-pooling layer, offering shift and scale tolerance to input samples; cascaded incremental PCA, achieving plasticity of feature extraction; and incremental SVM, pursuing plasticity of classifier construction. Unlike CNN, the plasticity in our model involves no back-propagation (BP) process and does not require a huge number of parameters. Experimental results validate the plasticity of our model in both feature and classifier construction, and further support the physiological hypothesis that plasticity in higher layers is greater than in lower layers.
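The key mechanism the abstract describes, updating PCA filters from streaming batches without keeping all training data in memory, can be sketched with scikit-learn's `IncrementalPCA`. This is an illustrative stand-in, not the authors' implementation; the batch sizes, dimensions, and the use of the learned components as PCANet-style filters are assumptions for the example.

```python
# Hedged sketch of incremental PCA for streaming data, using
# scikit-learn's IncrementalPCA (not the paper's own code).
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=8)  # 8 filters; chosen for illustration

# Simulate time-varying data arriving in batches: each partial_fit
# updates the principal components without reloading past batches.
for _ in range(5):
    batch = rng.normal(size=(64, 32))  # 64 samples, 32-dim patches (assumed)
    ipca.partial_fit(batch)

# In a PCANet-style pipeline, the learned components would act as
# convolution filters; here we just project new samples onto them.
features = ipca.transform(rng.normal(size=(10, 32)))
print(features.shape)  # (10, 8)
```

An incremental SVM stage could be approximated the same way with a `partial_fit`-capable linear classifier (e.g. `sklearn.linear_model.SGDClassifier` with hinge loss), giving the classifier-side plasticity the abstract mentions.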
Proceedings: BICS 2016
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/13248]
Collection: Institute of Automation, Research Center for Brain-Inspired Intelligence
Corresponding author: Zhaoxiang Zhang
Recommended citation (GB/T 7714):
Wang-Li Hao, Zhaoxiang Zhang. Incremental PCANet: A Lifelong Learning Framework to Achieve the Plasticity of both Feature and Classifier Constructions[C]. In: BICS 2016. Beijing, China. 28-30 November 2016.