Multimodal deep generative adversarial models for scalable doubly semi-supervised learning
Du, Changde1,3,5,6; Du, Changying4; He, Huiguang1,2,3,6
Journal: INFORMATION FUSION
Publication date: 2021-04-01
Volume: 68, Pages: 118-130
Keywords: Multiview learning; Multimodal fusion; Generative adversarial networks; Deep generative models; Semi-supervised learning
ISSN: 1566-2535
DOI: 10.1016/j.inffus.2020.11.003
Corresponding author: He, Huiguang (huiguang.he@ia.ac.cn)
Abstract: The comprehensive utilization of incomplete multimodal data is a difficult problem with strong practical value. Most previous multimodal learning algorithms require massive training data with complete modalities and annotated labels, which greatly limits their practicality. Although some existing algorithms can perform the data-imputation task, they still have two disadvantages: (1) they cannot accurately control the semantics of the imputed modalities; and (2) when extended to multimodal cases, they must establish multiple independent converters between every pair of modalities. To overcome these limitations, we propose a novel doubly semi-supervised multimodal learning (DSML) framework. Specifically, DSML uses a modality-shared latent space and multiple modality-specific generators to associate multiple modalities. We divide the shared latent space into two independent parts, the semantic labels and the semantic-free styles, which allows us to easily control the semantics of generated samples. In addition, each modality has its own separate encoder and classifier to infer the corresponding semantic and semantic-free latent variables. The DSML framework can be adversarially trained using our specially designed softmax-based discriminators. Extensive experimental results show that DSML outperforms the baselines on three tasks: semi-supervised classification, missing-modality imputation, and cross-modality retrieval.
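The shared-latent-space idea in the abstract can be illustrated with a minimal sketch (hypothetical, not the authors' implementation): every modality-specific generator reads the *same* latent code, which is split into a one-hot semantic label and a semantic-free style vector. Controlling semantics then only requires changing the label half, and no pairwise modality-to-modality converters are needed. All dimensions, names, and the linear "generators" below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's code): a shared latent code split into
# a semantic one-hot label y and a semantic-free style z, consumed by
# per-modality linear "generators". Dimensions are arbitrary placeholders.
rng = np.random.default_rng(0)

n_classes, style_dim = 10, 16
dims = {"image": 784, "text": 300}  # two example modalities

# One generator per modality, all reading the SAME shared latent space,
# so no pairwise converters between modalities are required.
generators = {m: rng.standard_normal((n_classes + style_dim, d))
              for m, d in dims.items()}

def sample_latent(label):
    """Shared latent = one-hot semantic label + semantic-free style."""
    y = np.zeros(n_classes)
    y[label] = 1.0
    z = rng.standard_normal(style_dim)
    return np.concatenate([y, z])

def generate(latent):
    """Impute every modality from a single shared latent code."""
    return {m: latent @ W for m, W in generators.items()}

samples = generate(sample_latent(label=3))
```

Reusing the same style vector while changing only the label index would alter the class semantics of all imputed modalities at once, which is the controllability the abstract refers to.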
Funding projects: National Natural Science Foundation of China [61976209, 62020106015, 61906188]; Chinese Academy of Sciences (CAS) International Collaboration Key Project, China [173211KYSB20190024]; Strategic Priority Research Program of CAS, China [XDB32040000]
WOS research area: Computer Science
Language: English
Publisher: ELSEVIER
WOS accession number: WOS:000616409600009
Funding organizations: National Natural Science Foundation of China; Chinese Academy of Sciences (CAS) International Collaboration Key Project, China; Strategic Priority Research Program of CAS, China
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/43265
Collection: Research Center for Brain-Inspired Intelligence — Neural Computation and Brain-Computer Interaction
Author affiliations:
1.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2.Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
3.Chinese Acad Sci, Inst Automat, Res Ctr Brain Inspired Intelligence, Beijing 100190, Peoples R China
4.Huawei Noah's Ark Lab, Beijing 100085, Peoples R China
5.Huawei Cloud BU EI Innovat Lab, Beijing 100085, Peoples R China
6.Univ Chinese Acad Sci, Beijing 100190, Peoples R China
Recommended citation:
GB/T 7714
Du, Changde, Du, Changying, He, Huiguang. Multimodal deep generative adversarial models for scalable doubly semi-supervised learning[J]. INFORMATION FUSION, 2021, 68: 118-130.
APA: Du, Changde, Du, Changying, & He, Huiguang. (2021). Multimodal deep generative adversarial models for scalable doubly semi-supervised learning. INFORMATION FUSION, 68, 118-130.
MLA: Du, Changde, et al. "Multimodal deep generative adversarial models for scalable doubly semi-supervised learning". INFORMATION FUSION 68 (2021): 118-130.