Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning
Huang, Wenzhen (1,2); Yin, Qiyue (1,2); Zhang, Junge (1,2); Huang, Kaiqi (1,2,3)
2021-02
Conference date: 2021-02
Conference venue: Online
Abstract

Model-based reinforcement learning (RL) is more sample-efficient than model-free RL because it uses imaginary trajectories generated by a learned dynamics model. When the model is inaccurate or biased, however, imaginary trajectories may be deleterious for training the action-value and policy functions. To alleviate this problem, this paper proposes adaptively reweighting the imaginary transitions so as to reduce the negative effects of poorly generated trajectories. More specifically, we evaluate the effect of an imaginary transition by calculating the change in the loss computed on real samples when the transition is used to train the action-value and policy functions. Based on this criterion, we reweight each imaginary transition via a well-designed meta-gradient algorithm. Extensive experimental results demonstrate that our method outperforms state-of-the-art model-based and model-free RL algorithms on multiple tasks. Visualization of the changing weights further validates the necessity of the reweighting scheme.
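
The record contains only the abstract, so the following is a minimal sketch of the meta-gradient reweighting idea it describes, not the authors' implementation. Everything concrete here is an assumption for illustration: the toy Q-network, the synthetic imaginary/real batches, the sigmoid parameterization of the per-transition weights, and the hyperparameters `inner_lr` and `meta_lr`. The sketch takes one differentiable inner update of the action-value parameters on the weighted imaginary batch, then backpropagates the loss on real samples through that update to adjust the weights.

```python
# Minimal sketch (assumed, not the paper's code) of meta-gradient
# reweighting of imaginary transitions for an action-value function.
import torch

torch.manual_seed(0)
state_dim, action_dim, hidden = 4, 1, 32
n_imag, n_real = 64, 64
inner_lr, meta_lr = 1e-2, 1e-1

# Q-network parameters as plain leaf tensors, so an inner SGD step
# stays differentiable with respect to the transition weights.
params = [
    (0.1 * torch.randn(state_dim + action_dim, hidden)).requires_grad_(),
    torch.zeros(hidden, requires_grad=True),
    (0.1 * torch.randn(hidden, 1)).requires_grad_(),
    torch.zeros(1, requires_grad=True),
]

def q_value(p, s, a):
    h = torch.tanh(torch.cat([s, a], dim=-1) @ p[0] + p[1])
    return (h @ p[2] + p[3]).squeeze(-1)

# One learnable logit per imaginary transition; its sigmoid is the weight.
phi = torch.zeros(n_imag, requires_grad=True)

# Synthetic stand-ins for an imaginary batch (from the learned model)
# and a real batch (from the environment), with regression targets y.
s_im, a_im, y_im = torch.randn(n_imag, state_dim), torch.randn(n_imag, action_dim), torch.randn(n_imag)
s_re, a_re, y_re = torch.randn(n_real, state_dim), torch.randn(n_real, action_dim), torch.randn(n_real)

for step in range(200):
    # Inner step: weighted loss on the imaginary batch, with
    # create_graph=True so the updated parameters depend on phi.
    w = torch.sigmoid(phi)
    inner_loss = (w * (q_value(params, s_im, a_im) - y_im) ** 2).mean()
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    updated = [p - inner_lr * g for p, g in zip(params, grads)]

    # Outer step: loss of the updated Q-function on real samples; its
    # gradient w.r.t. phi measures whether each imaginary transition
    # reduced or increased the real loss.
    outer_loss = ((q_value(updated, s_re, a_re) - y_re) ** 2).mean()
    meta_grad, = torch.autograd.grad(outer_loss, phi)
    with torch.no_grad():
        phi -= meta_lr * meta_grad

    # Finally, train the Q-function itself with the (detached) weights.
    w = torch.sigmoid(phi).detach()
    loss = (w * (q_value(params, s_im, a_im) - y_im) ** 2).mean()
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= inner_lr * g

print("learned weights (min/mean/max):",
      float(torch.sigmoid(phi).min()),
      float(torch.sigmoid(phi).mean()),
      float(torch.sigmoid(phi).max()))
```

In this sketch, transitions whose use increases the loss on real samples are gradually down-weighted toward zero, while helpful transitions keep weights near one; the same pattern would extend to the policy loss.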
 

Proceedings publisher: AAAI Press
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/46602
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Author affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2. CRISE, Institute of Automation, Chinese Academy of Sciences, Beijing, China
3. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
Recommended citation (GB/T 7714):
Huang, Wenzhen, Yin, Qiyue, Zhang, Junge, et al. Learning to Reweight Imaginary Transitions for Model-Based Reinforcement Learning[C]. Online, 2021-02.