Matryoshka Peek: Toward Learning Fine-Grained, Robust, Discriminative Features for Product Search
Kyaw, Zawlin [1]; Qi, Shuhan [3]; Gao, Ke [4]; Zhang, Hanwang [1]; Zhang, Luming [5]; Xiao, Jun [6]; Wang, Xuan [2]; Chua, Tat-Seng [1]
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Publication Date: 2017-06-01
Volume: 19, Issue: 6, Pages: 1272-1284
Keywords: Feature extraction; image representation; robust learning; image retrieval
ISSN: 1520-9210
DOI: 10.1109/TMM.2017.2655422
Abstract: In sharp contrast to traditional category/subcategory-level image retrieval, product image search aims to find the images containing the exact same product. This is a challenging problem because, in addition to being robust under different imaging conditions such as varying viewpoints and illumination changes, the features should also be able to distinguish the specific product among many similar products. Consequently, it is important to utilize a large dataset, containing many product classes, to learn a strongly discriminative representation. Building such a dataset requires laborious manual annotation. Toward learning fine-grained, robust, discriminative features for product image search, we present a novel paradigm that can construct the required dataset without any human annotation. Unlike other fine-grained recognition works that rely on high-quality annotated datasets and are narrowly focused on a specific object category, our method handles multiple object classes and requires minimal human effort. First, an ImageNet pretrained model is used to generate product clusters. As the original features from ImageNet are not discriminative, the clusters generated by this unsupervised procedure contain considerable noise. We alleviate this noise by explicitly modeling the noise distribution and automatically detecting errors during learning. The proposed paradigm is general, requires minimal human effort, and is applicable to any deep learning task where fine-grained discriminative features are desired. Extensive experiments on the ALISC dataset have demonstrated that our approach is sound and effective, surpassing the baseline GoogleNet model by 15.09%.
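The first stage of the paradigm described above, generating pseudo product classes by clustering ImageNet-pretrained features with no manual labels, can be sketched as follows. This is a minimal illustration, not the authors' code: the ResNet-50 backbone, k-means clustering, and the cluster count are assumptions for demonstration, and the subsequent noise-aware learning step is not shown.

```python
# Sketch of the unsupervised pseudo-labelling step: extract features from an
# ImageNet-pretrained CNN and cluster them into pseudo product classes.
# Backbone and clustering choices are illustrative assumptions only.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained backbone used as a fixed feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Return an (N, 2048) matrix of pooled CNN features."""
    feats = []
    for p in image_paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0))
    return torch.stack(feats).numpy()

def pseudo_label(image_paths, n_clusters=1000):
    """Cluster pretrained features into pseudo product classes.
    These labels are noisy, which is why the paper then models the noise
    distribution and detects errors during fine-tuning."""
    feats = extract_features(image_paths)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
```

The resulting cluster assignments serve only as noisy supervision; the paper's contribution lies in learning robustly from them rather than in the clustering itself.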
Funding: National Research Foundation, Prime Minister's Office, Singapore, under its IRC@SG Funding Initiative; National Natural Science Foundation of China [61525206]; National Natural Science Foundation of China [61271428]; National Natural Science Foundation of China [61572169]; National Natural Science Foundation of China [61472266]; International Exchange and Cooperation Foundation of Shenzhen City [GJHZ20150312114149569]; National University of Singapore (Suzhou) Research Institute; Fundamental Research Funds for the Central Universities
WOS Research Areas: Computer Science; Telecommunications
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number: WOS:000404059400013
Content Type: Journal Article
Source URL: http://119.78.100.204/handle/2XEOYT63/7071
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Articles (English)
Corresponding Author: Qi, Shuhan
作者单位1.Natl Univ Singapore, Sch Comp, Singapore 119077, Singapore
2.Harbin Inst Technol, Comp Applicat Res Ctr, ShenZhen Grad Sch, Shenzhen 518055, Peoples R China
3.Harbin Inst Technol, ShenZhen Grad Sch, Shenzhen 518055, Peoples R China
4.Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
5.Hefei Univ Technol, Hefei 132312, Peoples R China
6.Zhejiang Univ, Hangzhou 132312, Zhejiang, Peoples R China
Recommended Citation
GB/T 7714
Kyaw, Zawlin, Qi, Shuhan, Gao, Ke, et al. Matryoshka Peek: Toward Learning Fine-Grained, Robust, Discriminative Features for Product Search[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2017, 19(6): 1272-1284.
APA Kyaw, Zawlin, Qi, Shuhan, Gao, Ke, Zhang, Hanwang, Zhang, Luming, ... & Chua, Tat-Seng. (2017). Matryoshka Peek: Toward Learning Fine-Grained, Robust, Discriminative Features for Product Search. IEEE TRANSACTIONS ON MULTIMEDIA, 19(6), 1272-1284.
MLA Kyaw, Zawlin, et al. "Matryoshka Peek: Toward Learning Fine-Grained, Robust, Discriminative Features for Product Search". IEEE TRANSACTIONS ON MULTIMEDIA 19.6 (2017): 1272-1284.