Cross-Modality Paired-Images Generation and Augmentation for RGB-Infrared Person Re-Identification
Guan'an Wang 1,4; Yang Yang 4; Tianzhu Zhang 2; Jian Cheng 1,3,4; Zengguang Hou 1,3,4; Prayag Tiwari 5; Hari Mohan Pandey 6
Journal: Neural Networks
2020
Volume: 128, Issue: 128, Pages: 294-304
Keywords: Person re-identification; Cross-modality; Feature disentanglement; Image generation; Adversarial learning
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2020.05.008
Corresponding author: Hou, Zengguang (zengguang.hou@ia.ac.cn)
Abstract

RGB-Infrared (IR) person re-identification is very challenging due to the large cross-modality variations
between RGB and IR images. Since no correspondence labels exist between every pair of RGB and IR
images, most methods try to alleviate the variations with set-level alignment by reducing the marginal
distribution divergence between the entire RGB and IR sets. However, this set-level alignment strategy
may misalign some instances, which limits the performance of RGB-IR Re-ID. Different
from existing methods, in this paper we propose to generate cross-modality paired images and
perform both global set-level and fine-grained instance-level alignment. Our proposed method enjoys
several merits. First, our method can perform set-level alignment by disentangling modality-specific
and modality-invariant features. Compared with conventional methods, ours explicitly removes
the modality-specific features, so the modality variation can be better reduced. Second, given cross-modality
unpaired images of a person, our method can generate cross-modality paired images from
exchanged features. With them, we can directly perform instance-level alignment by minimizing the
distance between every pair of images. Third, our method learns a latent manifold space. In that space,
we can randomly sample and generate many images of unseen classes. Training with those images
makes the learned identity feature space smoother, so it generalizes better at test time. Finally, extensive
experimental results on two standard benchmarks demonstrate that the proposed model performs favorably
against state-of-the-art methods.
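The feature-exchange idea in the abstract can be illustrated with a toy sketch (all function names, the split point `k`, and the embedding shapes are illustrative assumptions, not the authors' implementation): each embedding is split into a modality-specific part and a modality-invariant part, swapping the specific parts yields cross-modality paired representations of the same identity, and instance-level alignment then minimizes the distance between each pair.

```python
import numpy as np

def split_features(z, k):
    # Split an embedding into a modality-specific part (first k dims)
    # and a modality-invariant part (remaining dims).
    return z[:k], z[k:]

def exchange(z_rgb, z_ir, k):
    # Swap the modality-specific parts to form cross-modality
    # paired representations of the same identity.
    s_rgb, i_rgb = split_features(z_rgb, k)
    s_ir, i_ir = split_features(z_ir, k)
    z_r2i = np.concatenate([s_ir, i_rgb])   # RGB identity content, IR modality
    z_i2r = np.concatenate([s_rgb, i_ir])   # IR identity content, RGB modality
    return z_r2i, z_i2r

def instance_alignment_loss(pairs):
    # Instance-level alignment: mean squared distance over paired embeddings.
    return float(np.mean([np.sum((a - b) ** 2) for a, b in pairs]))

rng = np.random.default_rng(0)
z_rgb, z_ir = rng.normal(size=8), rng.normal(size=8)
z_r2i, z_i2r = exchange(z_rgb, z_ir, k=3)
# The exchanged embedding keeps the RGB image's modality-invariant part.
assert np.allclose(z_r2i[3:], z_rgb[3:])
loss = instance_alignment_loss([(z_r2i, z_ir), (z_i2r, z_rgb)])
```

In the paper the exchanged features feed a generator that synthesizes actual paired images; this sketch only shows the exchange-and-align structure at the feature level.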

Funding projects: National Natural Science Foundation of China [61720106012]; National Natural Science Foundation of China [61533016]; National Natural Science Foundation of China [61806203]; Strategic Priority Research Program of Chinese Academy of Science [XDBS01000000]; Beijing Natural Science Foundation [L172050]
WOS keywords: NETWORKS
WOS research areas: Computer Science; Neurosciences & Neurology
Language: English
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
WOS record number: WOS:000567770800006
Funding agencies: National Natural Science Foundation of China; Strategic Priority Research Program of Chinese Academy of Science; Beijing Natural Science Foundation
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/41444
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Center for Biometrics and Security Research
Author affiliations:
1. University of Chinese Academy of Sciences
2. University of Science and Technology of China
3. Center for Excellence in Brain Science and Intelligence Technology
4. Institute of Automation, Chinese Academy of Sciences (CASIA)
5. University of Padova
6. Edge Hill University
Recommended citation:
GB/T 7714
Guan'an Wang, Yang Yang, Tianzhu Zhang, et al. Cross-Modality Paired-Images Generation and Augmentation for RGB-Infrared Person Re-Identification[J]. Neural Networks, 2020, 128(128): 294-304.
APA: Guan'an Wang, Yang Yang, Tianzhu Zhang, Jian Cheng, Zengguang Hou, ... & Hari Mohan Pandey. (2020). Cross-Modality Paired-Images Generation and Augmentation for RGB-Infrared Person Re-Identification. Neural Networks, 128(128), 294-304.
MLA: Guan'an Wang, et al. "Cross-Modality Paired-Images Generation and Augmentation for RGB-Infrared Person Re-Identification". Neural Networks 128.128 (2020): 294-304.