Long Time Sequential Task Learning From Unstructured Demonstration
Zhang HW (张会文)^1,2,3; Liu YW (刘玉旺)^2,3
Journal | IEEE ACCESS
Year | 2019
Volume | 7, Pages: 96240-96252
Keywords | Bayesian segmentation; imitation learning; KL divergence; movement primitives; mixture model
ISSN | 2169-3536
Rights Ranking | 1
Abstract | Learning from demonstration (LfD), which provides a natural way to transfer skills to robots, has been extensively researched for decades, and a wide range of methods and applications have been developed for learning an individual or low-level task. Nevertheless, learning long time sequential tasks remains very difficult, as it involves task segmentation and sub-task clustering under extremely large demonstration variance. In addition, the representation problem must be considered during segmentation. This paper presents a new unified framework that solves the problems of segmentation, clustering, and representation in a sequential task. The segmentation algorithm segments unstructured demonstrations into movement primitives (MPs). The MPs are then automatically clustered and labeled so that they can be reused in other tasks. Finally, a representation model is leveraged to encode and generalize the learned MPs in new contexts. To achieve the first goal, a change-point detection algorithm based on Bayesian inference is leveraged; it can segment unstructured demonstrations online with minimal prior knowledge. Following the Gaussian distribution assumption in the segmentation model, MPs are encoded by Gaussians or Gaussian mixture models, so the clustering of MPs is formulated as a clustering over cluster (CoC) problem. The Kullback-Leibler divergence is used to measure similarities between MPs, and MPs with smaller divergence are clustered into the same group. To replay and generalize the task in novel contexts, we use task-parameterized regression models such as Gaussian mixture regression. We implemented our framework on a sequential open-and-place task. The experiments demonstrate that the segmentation accuracy of our framework reaches 94.3% and the recognition accuracy reaches 97.1%. Comparisons with state-of-the-art algorithms also indicate that our framework is superior or comparable to them.
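The abstract states that MP similarity is measured with the Kullback-Leibler divergence between Gaussian encodings, and that MPs with smaller divergence are grouped together. As an illustrative sketch only (not the authors' implementation; the function names are hypothetical), the closed-form KL divergence between two multivariate Gaussians, symmetrized so it can serve as a clustering distance, can be computed as:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N(mu0,cov0) || N(mu1,cov1))."""
    k = mu0.shape[0]                      # dimensionality
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)         # trace term
        + diff @ cov1_inv @ diff          # Mahalanobis term
        - k                               # dimension offset
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))  # log-det ratio
    )

def symmetric_kl(mu0, cov0, mu1, cov1):
    """Symmetrized KL, usable as a pairwise distance between Gaussian MPs."""
    return (gaussian_kl(mu0, cov0, mu1, cov1)
            + gaussian_kl(mu1, cov1, mu0, cov0))
```

With such a pairwise distance matrix over all MPs, any standard clustering method (e.g. agglomerative clustering) could group similar primitives; for GMM-encoded MPs the KL divergence has no closed form and is typically approximated, e.g. by sampling.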
Funding | National Natural Science Foundation of China [51605474]; National Natural Science Foundation of China [61821005]
WOS Keywords | ONLINE SEGMENTATION; MOVEMENT; PRIMITIVES; IMITATION; MOTION; SKILLS
WOS Research Areas | Computer Science; Engineering; Telecommunications
Language | English
WOS Accession Number | WOS:000478961900067
Content Type | Journal article
Source URL | [http://ir.sia.cn/handle/173321/25461]
Collection | Shenyang Institute of Automation _ Space Automation Technology Research Laboratory
Corresponding Author | Zhang HW (张会文)
Author Affiliations | 1. Shenyang Institute of Automation, University of Chinese Academy of Sciences, Beijing 100049, China; 2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; 3. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
Recommended Citation (GB/T 7714) | Zhang HW, Liu YW. Long Time Sequential Task Learning From Unstructured Demonstration[J]. IEEE ACCESS, 2019, 7: 96240-96252.
APA | Zhang HW, & Liu YW. (2019). Long Time Sequential Task Learning From Unstructured Demonstration. IEEE ACCESS, 7, 96240-96252.
MLA | Zhang HW, et al. "Long Time Sequential Task Learning From Unstructured Demonstration". IEEE ACCESS 7 (2019): 96240-96252.