Learning Aligned Image-Text Representations Using Graph Attentive Relational Network
Jing, Ya (2,3); Wang, Wei (2,3); Wang, Liang (1,2,3); Tan, Tieniu (1,2,3)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Year: 2021
Volume: 30; Pages: 1840-1852
Keywords: Graph neural networks; Visualization; Semantics; Task analysis; Feature extraction; Annotations; Recurrent neural networks; Image-text matching; cross-modal retrieval; person search; graph neural network
ISSN: 1057-7149
DOI: 10.1109/TIP.2020.3048627
Abstract

Image-text matching, which aims to measure the similarities between images and textual descriptions, has made great progress recently. The key to this cross-modal matching task is to build the latent semantic alignment between visual objects and words. Due to the widespread variations of sentence structures, it is very difficult to learn the latent semantic alignment using only global cross-modal features. Many previous methods attempt to learn aligned image-text representations with an attention mechanism but generally ignore the relationships within textual descriptions, which determine whether words belong to the same visual object. In this paper, we propose a graph attentive relational network (GARN) to learn aligned image-text representations for identity-aware image-text matching by modeling the relationships between noun phrases in a text. In the GARN, we first decompose images and texts into regions and noun phrases, respectively. A skip graph neural network (skip-GNN) then learns effective textual representations that are a mixture of textual features and relational features. Finally, a graph attention network obtains the probabilities that the noun phrases belong to the image regions by modeling the relationships between noun phrases. We perform extensive experiments on the CUHK Person Description dataset (CUHK-PEDES), the Caltech-UCSD Birds dataset (CUB), the Oxford-102 Flowers dataset, and the Flickr30K dataset to verify the effectiveness of each component in our model. Experimental results show that our approach achieves state-of-the-art results on these four benchmark datasets.
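To make the pipeline concrete, the skip-GNN step can be sketched as follows. This is a minimal, hypothetical PyTorch layer under assumed design choices (layer sizes, normalized-adjacency aggregation, additive skip mixing), not the authors' implementation: node features are noun-phrase embeddings, the adjacency matrix encodes the phrase graph, and a skip path mixes the original textual features back into the aggregated relational features, yielding the "mixture of textual features and relational features" the abstract describes.

```python
# Hypothetical sketch of one skip-GNN layer (assumed design, not the paper's code):
# relational features come from message passing over the phrase graph, and a
# skip path mixes the phrase's own textual features back into the output.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.relation = nn.Linear(dim, dim)  # transform aggregated neighbor features
        self.skip = nn.Linear(dim, dim)      # transform the phrase's own textual features

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_phrases, dim) noun-phrase embeddings
        # adj: (num_phrases, num_phrases) row-normalized phrase-graph adjacency
        relational = self.relation(adj @ x)       # relational features via message passing
        return F.relu(relational + self.skip(x))  # skip connection keeps textual features
```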

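The final phrase-to-region assignment can likewise be sketched as cross-modal attention. The projected dot-product scoring below is an assumption for illustration; the paper's graph attention network additionally models relationships between noun phrases when computing these probabilities, which this minimal version omits.

```python
# Hypothetical sketch of phrase-to-region attention (assumed scoring function):
# given phrase features from the skip-GNN and region features from an image
# encoder, produce for each noun phrase a probability distribution over regions.
import torch
import torch.nn as nn

class PhraseRegionAttention(nn.Module):
    def __init__(self, phrase_dim: int, region_dim: int, hidden: int = 256):
        super().__init__()
        self.proj_p = nn.Linear(phrase_dim, hidden)  # project phrases to a shared space
        self.proj_r = nn.Linear(region_dim, hidden)  # project regions to the same space

    def forward(self, phrases: torch.Tensor, regions: torch.Tensor) -> torch.Tensor:
        # phrases: (P, phrase_dim), regions: (R, region_dim)
        logits = self.proj_p(phrases) @ self.proj_r(regions).t()  # (P, R) affinities
        return logits.softmax(dim=-1)  # row p is P(region | phrase p)
```

In a full model, the resulting per-phrase distributions would typically weight the region features to build phrase-aligned visual representations for the final matching score.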
Funding Projects: National Key Research and Development Program of China [2016YFB1001000]; National Natural Science Foundation of China [61976214]; National Natural Science Foundation of China [61721004]; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) [2019JZZY010119]
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:000611077900003
Funding Agencies: National Key Research and Development Program of China; National Natural Science Foundation of China; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project)
Content Type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/42891
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Corresponding Author: Wang, Wei
作者单位1.Chinese Acad Sci CASIA, Inst Automat, Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
2.Chinese Acad Sci CASIA, Inst Automat, Ctr Res Intelligent Percept & Comp, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
3.Univ Chinese Acad Sci, Beijing 100044, Peoples R China
Recommended Citation:
GB/T 7714: Jing, Ya, Wang, Wei, Wang, Liang, et al. Learning Aligned Image-Text Representations Using Graph Attentive Relational Network[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 1840-1852.
APA: Jing, Ya, Wang, Wei, Wang, Liang, & Tan, Tieniu. (2021). Learning Aligned Image-Text Representations Using Graph Attentive Relational Network. IEEE TRANSACTIONS ON IMAGE PROCESSING, 30, 1840-1852.
MLA: Jing, Ya, et al. "Learning Aligned Image-Text Representations Using Graph Attentive Relational Network". IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021): 1840-1852.