Formation of Modular Self-reconfigurable Robots Based on Improved Reinforcement Learning
Submitted: 2021-09-19    Revised: 2021-10-27
Authors:
Li Weike, Wuyi University, 529020
Yue Hongwei, Wuyi University
Wang Hongmin, Wuyi University
Yang Yong, Shenzhen Shanchuan Robot Co., Ltd.
Zhao Min, Shenzhen Institute of Artificial Intelligence and Robotics
Deng Fuqin* (corresponding author), Wuyi University, 529020
Funding: National High-Tech Research and Development Program of China (863 Program)
Abstract: A traditional reinforcement learning algorithm has no prior knowledge of the surrounding environment at the start of training, so a modular self-reconfigurable robot (MSRR) selects actions at random, which wastes iterations and slows convergence. To address this problem, a two-stage reinforcement learning algorithm is proposed. In the first stage, a group-based, knowledge-sharing Q-learning algorithm trains the robots to move to the center point of the grid map, producing an optimal shared Q-table. To reduce the number of iterations and accelerate convergence in this stage, the Manhattan distance is introduced into the reward to guide each robot toward the center point and mitigate the effect of sparse rewards. In the second stage, each robot uses the optimal shared Q-table and its current position to find an optimal path to its assigned target point, and the robots assemble into the specified formation. Experimental results show that on a 50×50 grid map, the proposed algorithm successfully trains the robots to reach their target points while using nearly 50% fewer total exploration steps than the baseline algorithm. Moreover, during formation switching, the formation runtime is reduced by a factor of nearly five.
Keywords: modular self-reconfigurable robots; reinforcement learning; multi-agent; formation
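The two-stage scheme in the abstract (stage 1: shared Q-table training with a Manhattan-distance shaped reward toward the grid center; stage 2: path extraction from the shared table) can be illustrated in code. Below is a minimal Python sketch, not the paper's implementation: the hyperparameters and all names (shaped_reward, train, path_to_center) are hypothetical, and stage 2 is simplified to paths toward the center, whereas the paper routes each robot to its own formation target point.

import random

SIZE = 50                          # 50x50 grid map, as in the experiments
CENTER = (SIZE // 2, SIZE // 2)    # stage-1 goal: the map's center point
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # hypothetical hyperparameters

Q = {}  # shared Q-table: (state, action_index) -> value

def q(s, a):
    return Q.get((s, a), 0.0)

def manhattan(p, goal):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

def step(s, a):
    dx, dy = ACTIONS[a]
    return (min(max(s[0] + dx, 0), SIZE - 1),
            min(max(s[1] + dy, 0), SIZE - 1))

def shaped_reward(s, nxt):
    # Manhattan-distance shaping: +1 for moving closer to the center,
    # -1 for moving away, a large bonus on arrival. This densifies the
    # otherwise sparse reward signal described in the abstract.
    if nxt == CENTER:
        return 100.0
    return manhattan(s, CENTER) - manhattan(nxt, CENTER)

def train(episodes=3000, robots=4, max_steps=500):
    # Stage 1: every robot writes into the same shared Q-table
    # (knowledge sharing), so experience is pooled across the group.
    for _ in range(episodes):
        for _ in range(robots):
            s = (random.randrange(SIZE), random.randrange(SIZE))
            for _ in range(max_steps):
                if s == CENTER:
                    break
                if random.random() < EPSILON:
                    a = random.randrange(4)                   # explore
                else:
                    a = max(range(4), key=lambda i: q(s, i))  # exploit
                nxt = step(s, a)
                target = shaped_reward(s, nxt) + GAMMA * max(q(nxt, i) for i in range(4))
                Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))
                s = nxt

def path_to_center(start, limit=200):
    # Stage 2 (simplified): greedily follow the shared Q-table from the
    # robot's current cell and record the resulting path.
    path, s = [start], start
    while s != CENTER and len(path) < limit:
        s = step(s, max(range(4), key=lambda i: q(s, i)))
        path.append(s)
    return path

train()
print(path_to_center((0, 0)))

Because the shaping term is the decrease in Manhattan distance to the goal, a robot receives immediate feedback on every move instead of only at the goal cell, which is what lets the shared Q-table converge with far fewer exploration steps.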
 