Learning to Play Hard Exploration Games Using Graph-guided Self-navigation
Zhao EM(赵恩民)1,2; Yan RY(闫仁业)1,2; Li K(李凯)2; Li LJ(李丽娟)2; Xing JL(兴军亮)1,2
2021-02
Conference Date: 2021-02
Conference Venue: Online
DOI:
Abstract (English):
This work considers the problem of deep reinforcement learning (RL) with long time dependencies and sparse rewards, as are found in many hard exploration games. A graph-based representation is proposed to allow an agent to perform self-navigation for environmental exploration. The graph representation not only effectively models the environment structure, but also efficiently traces the agent state changes and the corresponding actions. By encouraging the agent to earn a new influence-based curiosity reward for new game observations, the whole exploration task is divided into sub-tasks, which are effectively solved using a unified deep RL model. Experimental evaluations on hard exploration Atari games demonstrate the effectiveness of the proposed method. The source code and learned models will be released to facilitate further studies on this problem.
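
Illustrative sketch (not the authors' released implementation): the Python snippet below shows one way the graph-guided exploration memory described in the abstract could be organized. Nodes are hashed observations, edges store the actions that connect them, a simple visit-count novelty bonus stands in for the influence-based curiosity reward, and a breadth-first search over the graph provides the self-navigation back to previously reached states. All names used here (ExplorationGraph, add_transition, navigate) are assumptions made for this sketch.

# Minimal sketch of a graph-guided exploration memory (illustrative only;
# the class and method names are assumptions, not the authors' API).
from collections import defaultdict, deque
import hashlib


class ExplorationGraph:
    def __init__(self):
        self.edges = defaultdict(dict)        # node -> {action: next_node}
        self.visit_counts = defaultdict(int)  # node -> number of visits

    @staticmethod
    def node_id(observation: bytes) -> str:
        # Map a raw observation to a compact node identifier.
        return hashlib.sha1(observation).hexdigest()

    def add_transition(self, obs: bytes, action: int, next_obs: bytes) -> float:
        # Record a transition and return a simple count-based novelty bonus,
        # a stand-in for the paper's influence-based curiosity reward.
        src, dst = self.node_id(obs), self.node_id(next_obs)
        self.edges[src][action] = dst
        self.visit_counts[dst] += 1
        return 1.0 / (self.visit_counts[dst] ** 0.5)

    def navigate(self, start_obs: bytes, goal_obs: bytes):
        # Return an action sequence from start to goal via BFS, or None.
        start, goal = self.node_id(start_obs), self.node_id(goal_obs)
        queue, seen = deque([(start, [])]), {start}
        while queue:
            node, path = queue.popleft()
            if node == goal:
                return path
            for action, nxt in self.edges[node].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [action]))
        return None


# Example usage with toy observations:
graph = ExplorationGraph()
bonus = graph.add_transition(b"room_0", 2, b"room_1")   # 1.0 on first visit
graph.add_transition(b"room_1", 3, b"room_2")
print(graph.navigate(b"room_0", b"room_2"))              # [2, 3]
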
Subject: Information Science and Systems Science
Language: English
Document Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/52241
Collection: 融合创新中心_决策指挥与体系智能
Corresponding Author: Xing JL(兴军亮)
Affiliations: 1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Zhao EM, Yan RY, Li K, et al. Learning to Play Hard Exploration Games Using Graph-guided Self-navigation[C]. In: Online. 2021-02.