Learning Multimodal Taxonomy via Variational Deep Graph Embedding and Clustering
Huaiwen Zhang1,2; Quan Fang1,2; Shengsheng Qian1,2; Changsheng Xu1,2
2018-10
Conference dates: October 22 - 26, 2018
Conference location: Seoul, Republic of Korea
Abstract

Taxonomy learning is an important problem that facilitates various applications such as semantic understanding and information retrieval. Previous work on building semantic taxonomies has relied primarily on labor-intensive human contributions or focused on text-based extraction. In this paper, we investigate the problem of automatically learning multimodal taxonomies from multimedia data on the Web. A systematic framework called Variational Deep Graph Embedding and Clustering (VDGEC) is proposed, consisting of two stages: concept graph construction and taxonomy induction via variational deep graph embedding and clustering. VDGEC discovers hierarchical concept relationships by exploiting semantic textual-visual correspondences and contextual co-occurrences in an unsupervised manner. The unstructured semantics and noise in multimedia documents are carefully addressed by VDGEC for high-quality taxonomy induction. We conduct extensive experiments on real-world datasets. Experimental results demonstrate the effectiveness of the proposed framework, where VDGEC outperforms previous unsupervised approaches by a large margin.
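The two-stage pipeline described in the abstract can be illustrated with a toy sketch: build a concept co-occurrence graph from tagged documents, smooth randomly initialized node vectors over the graph (a crude, non-variational stand-in for the paper's deep graph encoder), then cluster the embeddings to group related concepts. All function names and the smoothing/clustering choices below are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict
from itertools import combinations
import random

def build_concept_graph(documents):
    """Stage 1 (sketch): weighted co-occurrence graph, where the edge weight
    between two concepts is the number of documents containing both."""
    weights = defaultdict(int)
    nodes = set()
    for doc in documents:
        nodes.update(doc)
        for a, b in combinations(sorted(set(doc)), 2):
            weights[(a, b)] += 1
    return sorted(nodes), dict(weights)

def embed(nodes, weights, dim=4, steps=10, seed=0):
    """Toy embedding: random init, then repeated weighted neighborhood
    averaging, so nodes in the same graph region converge together.
    (Stand-in for the variational graph encoder, which this is not.)"""
    rng = random.Random(seed)
    idx = {n: i for i, n in enumerate(nodes)}
    emb = [[rng.gauss(0, 1) for _ in range(dim)] for _ in nodes]
    neigh = defaultdict(list)
    for (a, b), wt in weights.items():
        neigh[a].append((b, wt))
        neigh[b].append((a, wt))
    for _ in range(steps):
        new = []
        for n in nodes:
            acc, tot = list(emb[idx[n]]), 1.0
            for m, wt in neigh[n]:
                for d in range(dim):
                    acc[d] += wt * emb[idx[m]][d]
                tot += wt
            new.append([v / tot for v in acc])
        emb = new
    return {n: emb[idx[n]] for n in nodes}

def cluster2(points, iters=10):
    """Stage 2 (sketch): 2-means over the embeddings, seeding the second
    centroid at the point farthest from the first."""
    names = list(points)
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    c0 = points[names[0]][:]
    c1 = points[max(names, key=lambda n: d2(points[n], c0))][:]
    cents = [c0, c1]
    assign = {}
    for _ in range(iters):
        for n in names:
            assign[n] = 0 if d2(points[n], cents[0]) <= d2(points[n], cents[1]) else 1
        for c in (0, 1):
            members = [n for n in names if assign[n] == c]
            if members:
                cents[c] = [sum(points[n][d] for n in members) / len(members)
                            for d in range(len(cents[c]))]
    return assign
```

On a corpus with two disjoint topic groups (e.g. animal tags vs. vehicle tags), the smoothing pulls each group's vectors together and the clustering separates the groups; the real framework additionally fuses visual features and induces a hierarchy, which this sketch omits.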

Language: English
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/25824]
Research group: Multimedia Computing and Graphics Team
Author affiliations: 1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Huaiwen Zhang, Quan Fang, Shengsheng Qian, et al. Learning Multimodal Taxonomy via Variational Deep Graph Embedding and Clustering[C]. Seoul, Republic of Korea, October 22 - 26, 2018.