A CNN-Transformer Hybrid Recognition Approach for sEMG-Based Dynamic Gesture Prediction
Liu, Yanhong [3]; Li, Xingyu [3]; Yang, Lei [3]; Bian, Guibin [1,2]; Yu, Hongnian [3,4]
Journal: IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
Year: 2023
Volume: 72; Pages: 16
Keywords: Feature extraction; Gesture recognition; Task analysis; Transformers; Time-frequency analysis; Convolution; Data mining; Convolutional neural network (CNN); feature fusion; hand gesture recognition; surface electromyography (sEMG) sensor; transformer
ISSN: 0018-9456
DOI: 10.1109/TIM.2023.3273651
Corresponding author: Yang, Lei (leiyang2019@zzu.edu.cn)
Abstract: As a unique physiological electrical signal of the human body, the surface electromyography (sEMG) signal carries information about human movement intention and muscle state. By collecting sEMG signals, different gestures can be effectively recognized. At present, the convolutional neural network (CNN) is widely applied in gesture recognition systems. However, due to its inherent limitations in global context feature extraction, it falls short on high-precision prediction tasks. To address this issue, a CNN-transformer hybrid recognition approach is proposed for high-precision dynamic gesture prediction. The continuous wavelet transform (CWT) is adopted to acquire time-frequency maps from the raw signals. To achieve effective representation of local features in the time-frequency maps, an attention fusion block (AFB) is proposed to build the deep CNN network branch, which extracts key channel and spatial information from local features. To overcome the CNN's inherent limitations in global context feature extraction, a transformer network branch, called the convolution and transformer (CAT) branch, is proposed to model the global relationships between pixels. In addition, a multiscale feature attention (MFA) block is proposed to effectively aggregate local features and global contexts by learning adaptive multiscale features and suppressing irrelevant scale information. Experimental results on the established multichannel sEMG time-frequency map dataset show that the proposed CNN-transformer hybrid recognition network achieves competitive recognition performance compared with other state-of-the-art recognition networks, with an average recognition time of only 14.7 ms per spectrogram on the test set. The proposed network effectively improves recognition performance and efficiency.
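As a rough illustration of the pipeline the abstract describes (CWT time-frequency maps feeding a two-branch local/global network with feature fusion), the sketch below uses PyTorch and PyWavelets. All module names, shapes, and hyperparameters here are assumptions for illustration; the paper's AFB, CAT, and MFA blocks are simplified to a plain CNN branch, a transformer encoder branch, and a concatenation-based fusion head, so this is not the authors' implementation.

# A minimal sketch, assuming PyTorch and PyWavelets; all names and
# hyperparameters are illustrative, not taken from the paper.
import numpy as np
import pywt
import torch
import torch.nn as nn

def semg_to_tfmap(signal, fs=1000.0, n_scales=64):
    """One sEMG channel -> CWT time-frequency magnitude map (Morlet)."""
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    return np.abs(coeffs)                          # (n_scales, n_samples)

class HybridGestureNet(nn.Module):
    """CNN branch for local features + transformer branch for global context."""
    def __init__(self, in_ch=1, n_classes=10, dim=64):
        super().__init__()
        self.cnn = nn.Sequential(                  # local feature extractor
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(2 * dim, n_classes)  # simple fusion + classifier

    def forward(self, x):                          # x: (B, C, H, W) TF maps
        f = self.cnn(x)                            # (B, dim, h, w) local features
        tokens = f.flatten(2).transpose(1, 2)      # (B, h*w, dim) pixel tokens
        global_f = self.transformer(tokens).mean(1)   # pooled global context
        local_f = f.mean(dim=(2, 3))                  # pooled local feature
        return self.head(torch.cat([local_f, global_f], 1))  # fused prediction

# Usage: one 256-sample window of a single synthetic channel
tfmap = semg_to_tfmap(np.random.randn(256))
x = torch.tensor(tfmap, dtype=torch.float32)[None, None]  # (1, 1, 64, 256)
logits = HybridGestureNet()(x)                     # (1, 10) class scores

In the paper, the final fusion step is handled by the MFA block, which learns adaptive multiscale attention weights rather than the plain concatenation used in this sketch.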
Funding projects: National Key Research and Development Project of China [2020YFB1313701]; National Natural Science Foundation of China [62003309]; Outstanding Foreign Scientist Support Project in Henan Province of China [GZS2019008]
WOS keywords: HAND; SIGNALS; ROBUST
WOS research areas: Engineering; Instruments & Instrumentation
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS accession number: WOS:001000275700017
Funding agencies: National Key Research and Development Project of China; National Natural Science Foundation of China; Outstanding Foreign Scientist Support Project in Henan Province of China
Content type: Journal article
Source URL: http://ir.ia.ac.cn/handle/173211/53543
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
2. Zhengzhou Univ, Sch Elect Engn, Zhengzhou 450001, Henan, Peoples R China
3. Zhengzhou Univ, Sch Elect & Informat Engn, Zhengzhou 450001, Henan, Peoples R China
4. Edinburgh Napier Univ, Sch Engn & Built Environm, Edinburgh EH10 5DT, Scotland
Recommended citation:
GB/T 7714
Liu, Yanhong, Li, Xingyu, Yang, Lei, et al. A CNN-Transformer Hybrid Recognition Approach for sEMG-Based Dynamic Gesture Prediction[J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72: 16.
APA: Liu, Yanhong, Li, Xingyu, Yang, Lei, Bian, Guibin, & Yu, Hongnian. (2023). A CNN-Transformer Hybrid Recognition Approach for sEMG-Based Dynamic Gesture Prediction. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 72, 16.
MLA: Liu, Yanhong, et al. "A CNN-Transformer Hybrid Recognition Approach for sEMG-Based Dynamic Gesture Prediction". IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT 72 (2023): 16.