Multi-subject data augmentation for target subject semantic decoding with deep multi-view adversarial learning
Li, Dan1,3,4; Du, Changde1,3,4,5; Wang, Shengpei1,3,4; Wang, Haibao1,3,4; He, Huiguang1,2,3,4
Journal: INFORMATION SCIENCES
Publication date: 2021-02-08
Volume: 547, Pages: 1025-1044
Keywords: Data augmentation; Semantic decoding; Multi-view adversarial learning; Sparse reconstruction relation
ISSN: 0020-0255
DOI: 10.1016/j.ins.2020.09.012
Corresponding author: He, Huiguang (huiguang.he@ia.ac.cn)
Abstract: Functional magnetic resonance imaging (fMRI) is widely used in the field of brain semantic decoding. However, as fMRI data acquisition is time-consuming and expensive, the number of samples in existing fMRI datasets is usually small. It is difficult to build an accurate brain decoding model for a subject with insufficient fMRI data. The majority of semantic decoding methods focus on designing predictive models with limited samples, while less attention is paid to fMRI data augmentation. Leveraging data from related but different subjects can be regarded as a new strategy to improve the performance of the predictive model. There are two challenges when using information from different subjects: 1) feature mismatch; 2) distribution mismatch. In this paper, we propose a multi-subject fMRI data augmentation method to address the above two challenges, which can improve the decoding accuracy of the target subject. Specifically, subject information can be translated from one subject to another by using multiple subject-specific encoders, decoders and discriminators. The encoders map each subject to a shared latent space, solving the feature mismatch problem. The decoders and discriminators form multiple generative adversarial network architectures, which solve the distribution mismatch problem. Meanwhile, to ensure that the representation in the latent space preserves information of the input space, our method not only minimizes the local data reconstruction loss, but also preserves the sparse reconstruction (semantic) relation over the whole dataset of the input space. Extensive experiments on three fMRI datasets demonstrate the effectiveness of the proposed method. (C) 2020 Elsevier Inc. All rights reserved.
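The abstract outlines the architecture at a high level only. The following is a minimal sketch, assuming a PyTorch-style implementation, of how per-subject encoders, decoders and discriminators around a shared latent space could be wired together for cross-subject translation. All class names, layer sizes, voxel dimensions and loss weights here are illustrative assumptions, not the authors' released code, and the sparse-reconstruction-relation term of the paper is only indicated in a comment.

```python
# Illustrative sketch only: one possible layout of subject-specific
# encoders/decoders/discriminators with a shared latent space.
import torch
import torch.nn as nn

class SubjectEncoder(nn.Module):
    """Maps one subject's fMRI features to a shared latent space (addresses feature mismatch)."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
    def forward(self, x):
        return self.net(x)

class SubjectDecoder(nn.Module):
    """Generates samples in one subject's voxel space from the shared latent code."""
    def __init__(self, latent_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))
    def forward(self, z):
        return self.net(z)

class SubjectDiscriminator(nn.Module):
    """Scores whether a sample looks real in one subject's space (addresses distribution mismatch)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

# One encoder/decoder/discriminator per subject; voxel counts differ across subjects (hypothetical sizes).
voxel_dims = {"sub1": 3000, "sub2": 3500, "target": 2800}
latent_dim = 128
enc = {s: SubjectEncoder(d, latent_dim) for s, d in voxel_dims.items()}
dec = {s: SubjectDecoder(latent_dim, d) for s, d in voxel_dims.items()}
dis = {s: SubjectDiscriminator(d) for s, d in voxel_dims.items()}

def translate(x_src, src, tgt):
    """Translate fMRI samples from subject `src` into subject `tgt`'s voxel space."""
    return dec[tgt](enc[src](x_src))

# Local reconstruction loss: the latent code must preserve information of the input space.
mse = nn.MSELoss()
x = torch.randn(16, voxel_dims["sub1"])                 # toy batch for subject sub1
recon_loss = mse(dec["sub1"](enc["sub1"](x)), x)

# Adversarial loss (non-saturating GAN form) pushes translated samples toward the
# target subject's distribution. The paper's sparse-reconstruction-relation term
# would be added as a further regularizer over the whole dataset.
bce = nn.BCEWithLogitsLoss()
fake_tgt = translate(x, "sub1", "target")
adv_loss = bce(dis["target"](fake_tgt), torch.ones(16, 1))
```

In this reading, the translated samples (e.g. `fake_tgt`) serve as augmented training data for the target subject's decoding model; the exact loss weighting and network depths would need to follow the paper.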
Funding: National Natural Science Foundation of China [61976209]; National Natural Science Foundation of China [62020106015]; CAS International Collaboration Key Project [173211KYSB20190024]; Strategic Priority Research Program of CAS [XDB32040000]
WOS keywords: PATTERN-ANALYSIS; VOXEL SELECTION; IMAGES
WOS research area: Computer Science
Language: English
Publisher: ELSEVIER SCIENCE INC
WOS accession number: WOS:000590678700002
Funding organizations: National Natural Science Foundation of China; CAS International Collaboration Key Project; Strategic Priority Research Program of CAS
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/42680
Collection: Research Center for Brain-Inspired Intelligence_Neural Computation and Brain-Computer Interaction
Affiliations:
1. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
3. Chinese Acad Sci, Inst Automat, Res Ctr Brain Inspired Intelligence, Beijing 100190, Peoples R China
4. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
5. Huawei Cloud BU EI Innovat Lab, Beijing 100085, Peoples R China
Recommended citation formats:
GB/T 7714: Li, Dan, Du, Changde, Wang, Shengpei, et al. Multi-subject data augmentation for target subject semantic decoding with deep multi-view adversarial learning[J]. INFORMATION SCIENCES, 2021, 547: 1025-1044.
APA: Li, Dan, Du, Changde, Wang, Shengpei, Wang, Haibao, & He, Huiguang. (2021). Multi-subject data augmentation for target subject semantic decoding with deep multi-view adversarial learning. INFORMATION SCIENCES, 547, 1025-1044.
MLA: Li, Dan, et al. "Multi-subject data augmentation for target subject semantic decoding with deep multi-view adversarial learning". INFORMATION SCIENCES 547 (2021): 1025-1044.