Part-aligned pose-guided recurrent network for action recognition
Huang, Linjiang (1,2); Huang, Yan (1,2); Ouyang, Wanli (4); Wang, Liang (1,2,3)
Journal | PATTERN RECOGNITION
Date | 2019-08-01
Volume | 92
Pages | 165-176
Keywords | Action recognition ; Part alignment ; Auto-transformer attention
ISSN | 0031-3203
DOI | 10.1016/j.patcog.2019.03.010
Corresponding author | Wang, Liang (wangliang@nlpr.ia.ac.cn)
Abstract | Action recognition using pose information has drawn much attention recently. However, most previous approaches treat human pose as a whole or use pose only to extract robust features. In fact, human body parts play an important role in actions, so modeling the spatio-temporal information of body parts can effectively assist in classifying actions. In this paper, we propose a Part-aligned Pose-guided Recurrent Network (P²RN) for action recognition. The model mainly consists of two modules, i.e., a part alignment module and a part pooling module, which are used for part representation learning and part-related feature fusion, respectively. The part alignment module incorporates an auto-transformer attention, aiming to capture the spatial configuration of body parts and predict pose attention maps, while the part pooling module exploits both the symmetry and the complementarity of body parts to produce a fused body representation. The whole network is recurrent, allowing it to exploit the body representation while simultaneously modeling the spatio-temporal evolution of human body parts. Experiments on two publicly available benchmark datasets show state-of-the-art performance and demonstrate the power of the two proposed modules. (C) 2019 Elsevier Ltd. All rights reserved.
Funding projects | National Key Research and Development Program of China [2016YFB1001000] ; National Natural Science Foundation of China [61525306] ; National Natural Science Foundation of China [61633021] ; National Natural Science Foundation of China [61721004] ; National Natural Science Foundation of China [61420106015] ; National Natural Science Foundation of China [61806194] ; Capital Science and Technology Leading Talent Training Project [Z181100006318030] ; Beijing Science and Technology Project [Z181100008918010]
WOS keywords | REPRESENTATION ; HISTOGRAMS
WOS research areas | Computer Science ; Engineering
Language | English
Publisher | ELSEVIER SCI LTD
WOS accession number | WOS:000468013000014
Funding organizations | National Key Research and Development Program of China ; National Natural Science Foundation of China ; Capital Science and Technology Leading Talent Training Project ; Beijing Science and Technology Project
Content type | Journal article
Source URL | [http://ir.ia.ac.cn/handle/173211/24248]
Collection | Institute of Automation, Chinese Academy of Sciences
Author affiliations | 1. Univ Chinese Acad Sci, Beijing, Peoples R China ; 2. NLPR, CRIPAC, Beijing, Peoples R China ; 3. Chinese Acad Sci CASIA, Inst Automat, Ctr Excellence Brain Sci & Intelligence Technol C, Beijing, Peoples R China ; 4. Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW, Australia
Recommended citation (GB/T 7714) | Huang, Linjiang, Huang, Yan, Ouyang, Wanli, et al. Part-aligned pose-guided recurrent network for action recognition[J]. PATTERN RECOGNITION, 2019, 92: 165-176.
APA | Huang, Linjiang, Huang, Yan, Ouyang, Wanli, & Wang, Liang. (2019). Part-aligned pose-guided recurrent network for action recognition. PATTERN RECOGNITION, 92, 165-176.
MLA | Huang, Linjiang, et al. "Part-aligned pose-guided recurrent network for action recognition." PATTERN RECOGNITION 92 (2019): 165-176.