Lifelong robotic visual-tactile perception learning
Authors: Dong JH (董家华) [1,2,3]; Cong Y (丛杨) [1,2]; Sun G (孙干) [1,2]; Zhang T (张涛) [1,2,3]
Journal: Pattern Recognition
Year: 2022
Volume: 121, Pages: 1-12
Keywords: Lifelong machine learning; Robotics; Visual-tactile perception; Cross-modality learning; Multi-task learning
ISSN: 0031-3203
IPR Ranking: 1
Abstract:

Lifelong machine learning can learn a sequence of consecutive robotic perception tasks by transferring previous experiences. However, 1) most existing lifelong learning based perception methods exploit only visual information for robotic tasks, neglecting the tactile sensing modality, which captures discriminative material properties; 2) they cannot explore the intrinsic relationships across different modalities or the common characterization among different tasks of each modality, due to the distinct divergence between heterogeneous feature distributions. To address these challenges, we propose a new Lifelong Visual-Tactile Learning (LVTL) model for continuous robotic visual-tactile perception tasks, which fully explores the latent correlations in both intra-modality and cross-modality aspects. Specifically, a modality-specific knowledge library is developed for each modality to explore common intra-modality representations across different tasks, while narrowing the intra-modality mapping divergence between the semantic and feature spaces via an auto-encoder mechanism. Moreover, a sparse-constraint-based modality-invariant space is constructed to capture underlying cross-modality correlations and identify the contribution of each modality to newly arriving visual-tactile tasks. We further propose a modality consistency regularizer to efficiently align the heterogeneous visual and tactile samples, ensuring semantic consistency between the different modality-specific knowledge libraries. After deriving an efficient model optimization strategy, we conduct extensive experiments on several representative datasets to demonstrate the superiority of our LVTL model. Evaluation experiments show that the proposed model significantly outperforms existing state-of-the-art methods, with improvements of about 1.16%~15.36% under different lifelong visual-tactile perception scenarios.
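Illustrative sketch: the three components the abstract describes (per-modality auto-encoder knowledge libraries, a sparse modality-invariant space, and a modality consistency regularizer) can be pictured with the minimal PyTorch code below. This is not the authors' implementation; the class names, layer sizes, loss weights, the L1-style sparsity penalty, and the MSE-based alignment are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityKnowledgeLibrary(nn.Module):
    """Modality-specific knowledge library (sketch): an auto-encoder whose
    latent code stands in for the common intra-modality representation
    shared across tasks."""
    def __init__(self, feat_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, feat_dim)

    def forward(self, x):
        z = torch.relu(self.encoder(x))
        return z, self.decoder(z)

class LVTLSketch(nn.Module):
    """Hypothetical assembly of the abstract's components; dimensions and
    loss weights are illustrative, not taken from the paper."""
    def __init__(self, vis_dim=512, tac_dim=128, latent_dim=64, shared_dim=32):
        super().__init__()
        self.vis_lib = ModalityKnowledgeLibrary(vis_dim, latent_dim)
        self.tac_lib = ModalityKnowledgeLibrary(tac_dim, latent_dim)
        # Projections into a shared modality-invariant space; the L1-style
        # penalty below plays the role of the paper's sparse constraint.
        self.vis_proj = nn.Linear(latent_dim, shared_dim)
        self.tac_proj = nn.Linear(latent_dim, shared_dim)

    def forward(self, x_vis, x_tac):
        z_v, rec_v = self.vis_lib(x_vis)
        z_t, rec_t = self.tac_lib(x_tac)
        s_v, s_t = self.vis_proj(z_v), self.tac_proj(z_t)
        # Reconstruction terms narrow the intra-modality mapping divergence.
        loss_rec = F.mse_loss(rec_v, x_vis) + F.mse_loss(rec_t, x_tac)
        # Sparsity penalty on the modality-invariant embeddings.
        loss_sparse = s_v.abs().mean() + s_t.abs().mean()
        # Modality consistency regularizer: align paired visual/tactile
        # samples in the shared space (MSE used here as a stand-in).
        loss_consist = F.mse_loss(s_v, s_t)
        return loss_rec + 0.1 * loss_sparse + 0.1 * loss_consist

# Usage with random paired features (hypothetical dimensions):
# model = LVTLSketch()
# loss = model(torch.randn(8, 512), torch.randn(8, 128))
# loss.backward()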

Funding Projects: National Key Research and Development Program of China [2019YFB1310300]; National Natural Science Foundation of China [61821005]; National Natural Science Foundation of China [62003336]; National Postdoctoral Innovative Talents Support Program [BX20200353]; Natural Science Foundation of Liaoning Province of China [2020-KF-11-01]
WOS Keywords: FUSION
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record Number: WOS:000701148300015
Funding Organizations: National Key Research and Development Program of China under Grant 2019YFB1310300; National Natural Science Foundation of China under Grants 61821005 and 62003336; National Postdoctoral Innovative Talents Support Program (BX20200353); Natural Science Foundation of Liaoning Province of China under Grant 2020-KF-11-01
Content Type: Journal Article
Source URL: http://ir.sia.cn/handle/173321/29387
Collection: Shenyang Institute of Automation, Robotics Laboratory
Corresponding Author: Cong Y (丛杨)
Author Affiliations:
1. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China
2. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
3. University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation:
GB/T 7714: Dong JH, Cong Y, Sun G, et al. Lifelong robotic visual-tactile perception learning[J]. Pattern Recognition, 2022, 121: 1-12.
APA: Dong JH, Cong Y, Sun G, & Zhang T. (2022). Lifelong robotic visual-tactile perception learning. Pattern Recognition, 121, 1-12.
MLA: Dong JH, et al. "Lifelong robotic visual-tactile perception learning." Pattern Recognition 121 (2022): 1-12.