Mask-guided contrastive attention and two-stream metric co-learning for person Re-identification
Song, Chunfeng1,2; Shan, Caifeng3,4; Huang, Yan1,2; Wang, Liang1,2,4
Journal: NEUROCOMPUTING
Date: 2021-11-20
Volume: 465, Pages: 561-573
Keywords: Person ReID; Contrastive attention model; Two-stream metric learning
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2021.09.038
Corresponding author: Shan, Caifeng (caifeng.shan@gmail.com)
Abstract: Person Re-identification (ReID) is an important yet challenging task in computer vision. Due to diverse background clutter and variations in viewpoint and body pose, it is far from being solved. Extracting discriminative, robust features that are invariant to background clutter is one of the core problems. In this paper, we first introduce a set of binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, and then design a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions. Moreover, we propose a novel region-level triplet loss to guide the feature learning, i.e., pulling the features from the full image and the body region close while pushing the features from the background away. To learn similarities from the multiple features produced by MGCAM, we further introduce instance-level two-stream metric co-learning (TSMCL), which learns pair-wise relations between features from both different regions and different instances. TSMCL helps learn more compact features across the full and body streams, enhancing the performance of MGCAM. We evaluate the proposed method on four public datasets: MARS, Market-1501, CUHK03, and DukeMTMC-reID. Extensive experiments show that the proposed method is effective and achieves satisfactory results. (c) 2021 Elsevier B.V. All rights reserved.
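The region-level triplet loss described in the abstract pulls full-image and body-region features together while pushing background-region features away. As a rough illustration only (not the authors' released code), a minimal PyTorch sketch under these assumptions follows; the function name, the margin value, and the use of squared Euclidean distance on L2-normalised features are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def region_level_triplet_loss(f_full, f_body, f_bg, margin=0.3):
    """Hedged sketch of a region-level triplet loss as described in the abstract.

    f_full, f_body, f_bg: (N, D) feature batches pooled from the full image,
    the mask-selected body region, and the background region, respectively.
    The loss pulls full-image and body features together while pushing
    background features away, with a hinge margin (value assumed here).
    """
    # L2-normalise so distances are comparable across the three streams.
    f_full = F.normalize(f_full, dim=1)
    f_body = F.normalize(f_body, dim=1)
    f_bg = F.normalize(f_bg, dim=1)

    d_pos = (f_full - f_body).pow(2).sum(dim=1)  # full vs. body (positive pair)
    d_neg = (f_full - f_bg).pow(2).sum(dim=1)    # full vs. background (negative pair)

    # Hinge: the positive distance should be smaller than the negative
    # distance by at least `margin`; otherwise a penalty is incurred.
    return F.relu(d_pos - d_neg + margin).mean()
```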
Funding projects: National Natural Science Foundation of China [62006231]; National Natural Science Foundation of China [61525306]; National Natural Science Foundation of China [61633021]; National Natural Science Foundation of China [61721004]; National Natural Science Foundation of China [61420106015]; National Key Research and Development Program of China [2016YFB1001000]; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) [2019JZZY010-119]; Capital Science and Technology Leading Talent Training Project [Z181100006318030]; NVIDIA; NVIDIA DGX-1 AI Supercomputer
WOS keywords: NETWORK
WOS research area: Computer Science
Language: English
Publisher: ELSEVIER
WOS accession number: WOS:000702819100009
Funding agencies: National Natural Science Foundation of China; National Key Research and Development Program of China; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project); Capital Science and Technology Leading Talent Training Project; NVIDIA; NVIDIA DGX-1 AI Supercomputer
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/45735
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Author affiliations:
1. Chinese Acad Sci CASIA, Inst Automat, Ctr Res Intelligent Percept & Comp CRIPAC, Natl Lab Pattern Recognit NLPR, Beijing, Peoples R China
2. Univ Chinese Acad Sci UCAS, Beijing, Peoples R China
3. Shandong Univ Sci & Technol, Qingdao, Peoples R China
4. CAS AIR, Artificial Intelligence Res, Qingdao, Peoples R China
Recommended citation:
GB/T 7714: Song, Chunfeng, Shan, Caifeng, Huang, Yan, et al. Mask-guided contrastive attention and two-stream metric co-learning for person Re-identification[J]. NEUROCOMPUTING, 2021, 465: 561-573.
APA: Song, Chunfeng, Shan, Caifeng, Huang, Yan, & Wang, Liang. (2021). Mask-guided contrastive attention and two-stream metric co-learning for person Re-identification. NEUROCOMPUTING, 465, 561-573.
MLA: Song, Chunfeng, et al. "Mask-guided contrastive attention and two-stream metric co-learning for person Re-identification". NEUROCOMPUTING 465 (2021): 561-573.