What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective
Fu, Di2,3,4; Weber, Cornelius4; Yang, Guochun2,3; Kerzel, Matthias4; Nan, Weizhi1; Barros, Pablo4; Wu, Haiyan2,3; Liu, Xun2,3; Wermter, Stefan4
Journal: FRONTIERS IN INTEGRATIVE NEUROSCIENCE
Publication Date: 2020-02-27
Volume: 14; Pages: 18
Keywords: selective attention; visual attention; auditory attention; crossmodal learning; computational modeling; deep learning
ISSN: 1662-5145
DOI: 10.3389/fnint.2020.00010
Rights Order: 1
Document Subtype: Article
English Abstract:

Selective attention plays an essential role in acquiring and using information from the environment. Over the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary for solving real-world challenges in both human experiments and computational modeling. Although a growing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still needed before intelligent computational agents can reap the same benefits. This article reviews studies of selective attention in unimodal visual, unimodal auditory, and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights into how psychological findings and theories can be applied to artificial intelligence from different perspectives.

Funding Projects: National Natural Science Foundation of China (NSFC) [61621136008]; German Research Foundation (DFG) under project Transregio Crossmodal Learning [TRR 169]; CAS-DAAD
WOS Keywords: HUMAN AUDITORY-CORTEX; SUPERIOR-COLLICULUS; MULTISENSORY INTEGRATION; STIMULUS-DRIVEN; TOP-DOWN; NEURAL MECHANISMS; SPATIAL ATTENTION; COGNITIVE CONTROL; VISUAL-ATTENTION; SALIENCY
WOS Research Areas: Behavioral Sciences; Neurosciences & Neurology
Language: English
Publisher: FRONTIERS MEDIA SA
WOS Record Number: WOS:000526713900001
Funding Organizations: National Natural Science Foundation of China (NSFC); German Research Foundation (DFG) under project Transregio Crossmodal Learning; CAS-DAAD
Content Type: Journal Article
Source URL: http://ir.psych.ac.cn/handle/311026/31552
Collection: Institute of Psychology, CAS Key Laboratory of Behavioral Science
Corresponding Author: Liu, Xun
Author Affiliations:
1. Guangzhou Univ, Sch Educ, Dept Psychol, Ctr Brain & Cognit Sci, Guangzhou, Peoples R China
2. Univ Chinese Acad Sci, Dept Psychol, Beijing, Peoples R China
3. Chinese Acad Sci, Key Lab Behav Sci, Inst Psychol, Beijing, Peoples R China
4. Univ Hamburg, Dept Informat, Hamburg, Germany
Recommended Citation:
GB/T 7714
Fu, Di, Weber, Cornelius, Yang, Guochun, et al. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective[J]. FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2020, 14: 18.
APA: Fu, Di., Weber, Cornelius., Yang, Guochun., Kerzel, Matthias., Nan, Weizhi., ... & Wermter, Stefan. (2020). What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 14, 18.
MLA: Fu, Di, et al. "What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective". FRONTIERS IN INTEGRATIVE NEUROSCIENCE 14 (2020): 18.