Event-Driven Network for Cross-Modal Retrieval
Zhixiong Zeng; Nan Xu; Wenji Mao
2020-10
Conference Date: 2020.10.19
Conference Venue: Virtual Event
Abstract

Despite extensive research on cross-modal retrieval, existing methods focus on the matching between image objects and text words. However, for the large amount of social media content, such as news reports and online posts with images, previous methods are insufficient to model the associations between long texts and images. As a long text contains multiple entities and the relationships between them, as well as complex events sharing a common scenario, it poses a unique research challenge to cross-modal retrieval. To tackle this challenge, in this paper we focus on the retrieval task for long texts and images, and propose an event-driven network for cross-modal retrieval. Our approach consists of two modules, namely the contextual neural tensor network (CNTN) and the cross-modal matching network (CMMN). The CNTN module captures both the event-level and text-level semantics of the sequential events extracted from a long text. The CMMN module learns a common representation space in which to compute the similarity between the image and text modalities. We construct a multimodal dataset based on news reports from People's Daily. The experimental results demonstrate that our model outperforms existing state-of-the-art methods and provides semantically richer text representations that enhance the effectiveness of cross-modal retrieval.
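The two-module design described in the abstract can be illustrated with a brief sketch. The PyTorch code below is a minimal, hypothetical rendering of that pipeline, not the paper's actual implementation: the neural-tensor composition of event triples, the GRU-based contextualization in CNTN, the cosine-similarity matching in CMMN, and all layer names and dimensions (`NeuralTensorLayer`, `dim`, `k`, `hidden`, `common`) are assumptions for illustration, since the record does not specify the model's layer choices or sizes.

```python
# A minimal sketch of the CNTN + CMMN architecture described in the abstract.
# All design details (neural tensor composition, BiGRU context, cosine matching)
# are assumptions for illustration; the paper's actual model may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTensorLayer(nn.Module):
    """Combines two embeddings with a bilinear tensor term plus a linear term."""
    def __init__(self, dim, k):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, k)          # e1^T W^[1:k] e2 + b
        self.linear = nn.Linear(2 * dim, k, bias=False)   # V [e1; e2]

    def forward(self, e1, e2):
        return torch.tanh(self.bilinear(e1, e2)
                          + self.linear(torch.cat([e1, e2], dim=-1)))

class CNTN(nn.Module):
    """Contextual neural tensor network (sketch): composes (subject, predicate,
    object) triples into event-level embeddings, then contextualizes the event
    sequence with a bidirectional GRU to obtain a text-level representation."""
    def __init__(self, dim=128, k=128, hidden=128):
        super().__init__()
        self.sp = NeuralTensorLayer(dim, k)    # subject-predicate composition
        self.po = NeuralTensorLayer(dim, k)    # predicate-object composition
        self.event = NeuralTensorLayer(k, k)   # full event embedding
        self.context = nn.GRU(k, hidden, batch_first=True, bidirectional=True)

    def forward(self, subj, pred, obj):
        # subj/pred/obj: (batch, num_events, dim) embeddings of extracted events
        ev = self.event(self.sp(subj, pred), self.po(pred, obj))  # event-level
        out, _ = self.context(ev)                                 # contextualized
        return out.mean(dim=1)                                    # text-level

class CMMN(nn.Module):
    """Cross-modal matching network (sketch): projects text and image features
    into a common space and scores pairs with cosine similarity."""
    def __init__(self, text_dim=256, image_dim=2048, common=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, common)
        self.image_proj = nn.Linear(image_dim, common)

    def forward(self, text_feat, image_feat):
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        v = F.normalize(self.image_proj(image_feat), dim=-1)
        return t @ v.t()   # pairwise similarity matrix for retrieval ranking

# Usage: 4 texts with 6 extracted events each, matched against 4 images.
cntn, cmmn = CNTN(), CMMN()
subj = pred = obj = torch.randn(4, 6, 128)
text_feat = cntn(subj, pred, obj)      # (4, 256) text-level representations
image_feat = torch.randn(4, 2048)      # e.g. CNN-extracted image features
sim = cmmn(text_feat, image_feat)      # (4, 4) text-image similarity matrix
```

In practice such a model would be trained with a ranking objective (e.g. a triplet loss over the similarity matrix), but the record gives no details on the actual training setup.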
 

Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/51969
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Research Center for Internet Big Data and Security Informatics
Author Affiliations:
1. Shenzhen Artificial Intelligence and Data Science Research Institute (Longhua)
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Zhixiong Zeng, Nan Xu, Wenji Mao. Event-Driven Network for Cross-Modal Retrieval[C]. Virtual Event, 2020.10.19.