Continuous Reinforcement Learning with Knowledge-Inspired Reward Shaping for Autonomous Cavity Filter Tuning
Zhiyang Wang; Yongsheng Ou; Xinyu Wu; Wei Feng
2018
Conference Date | 2018
Conference Venue | Shenzhen
Abstract | Reinforcement learning has achieved great success in recent decades when applied to fields such as finance, robotics, and multi-agent games. Many traditional manual tasks are due for upgrading, and reinforcement learning opens the door to improving them. In this paper, we focus on cavity filter tuning, a traditionally manual task in the communications industry that is not only time-consuming but also highly dependent on human expertise. We present a framework based on Deep Deterministic Policy Gradient (DDPG) for automatically tuning cavity filters, and design appropriate reward functions inspired by human expertise in the tuning task. Simulation experiments validate the applicability of our algorithm: the proposed method autonomously tunes a detuned filter to meet its design specifications from any random starting position. |
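The abstract does not spell out the reward functions. As an illustrative sketch only, a knowledge-inspired shaped reward for filter tuning might combine a dense penalty on return-loss specification violations with a sparse completion bonus, mirroring how a human tuner judges progress from the S-parameter curve. All names and thresholds below (`shaped_reward`, `spec_db`, the S11 samples) are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def shaped_reward(s11_db, spec_db=-20.0, bonus=10.0):
    """Illustrative knowledge-inspired shaped reward (not the paper's).

    s11_db  : return-loss values S11 (dB) sampled across the passband;
              a well-tuned filter keeps every sample below spec_db.
    spec_db : design specification threshold in dB (assumed value).
    The dense term penalizes the total violation margin, giving the
    agent informative feedback even far from the specification; the
    sparse bonus fires only when every sampled point meets the spec.
    """
    s11_db = np.asarray(s11_db, dtype=float)
    violation = np.clip(s11_db - spec_db, 0.0, None)  # dB above spec, 0 if met
    reward = -violation.sum()
    if np.all(violation == 0.0):
        reward += bonus  # all specifications satisfied
    return reward
```

Under this sketch, a fully tuned filter (all S11 below -20 dB) collects the bonus, while a detuned one is penalized in proportion to how far it misses the specification, which is what lets a continuous-control agent such as DDPG learn from intermediate states.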
Content Type | Conference Paper
Source URL | [http://ir.siat.ac.cn:8080/handle/172644/13835]
Collection | Shenzhen Institute of Advanced Technology, Institute of Integration
Recommended Citation (GB/T 7714) | Zhiyang Wang, Yongsheng Ou, Xinyu Wu, et al. Continuous Reinforcement Learning with Knowledge-Inspired Reward Shaping for Autonomous Cavity Filter Tuning[C]. Shenzhen, 2018.