Improving the Diversity of Bootstrapped DQN by Replacing Priors With Noise

IEEE Transactions on Games (2023)

Abstract
Q-learning is one of the most well-known reinforcement learning algorithms, and considerable effort has gone into extending it with neural networks. The bootstrapped deep Q-learning network (bootstrapped DQN) is among these extensions: it uses multiple neural network heads to introduce diversity into Q-learning. Diversity can be viewed as the number of reasonable moves an agent can take in a given state, analogous to the exploration ratio in RL. The performance of bootstrapped DQN is therefore closely tied to the level of diversity within the algorithm. The original work showed that adding a random prior could improve the model's performance. In this article, we further explore replacing the priors with noise sampled from a Gaussian distribution to introduce more diversity into the algorithm. We evaluate our method on the Atari benchmark and compare it with the original algorithm and other related algorithms. The results show that our modification of bootstrapped DQN achieves significantly higher evaluation scores across different types of Atari games. We therefore conclude that replacing priors with noise can improve the performance of bootstrapped DQN by preserving diversity among the heads.
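The core change the abstract describes can be sketched roughly as follows. This is a minimal illustration assuming a PyTorch implementation; the head count, noise scale, and the exact point where the Gaussian noise is injected are placeholder assumptions for illustration, not the paper's reported configuration.

```python
# Minimal sketch of a multi-head (bootstrapped) Q-network where the fixed
# random-prior term is replaced by freshly sampled Gaussian noise.
# Assumed PyTorch; hyperparameters below are illustrative, not from the paper.
import torch
import torch.nn as nn

class BootstrappedQNet(nn.Module):
    def __init__(self, n_actions, n_heads=10, noise_std=0.1):
        super().__init__()
        # Shared convolutional trunk over stacked 84x84 Atari frames.
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 64 * 7 * 7
        # One Q-value head per bootstrap member.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                          nn.Linear(512, n_actions))
            for _ in range(n_heads)
        )
        self.noise_std = noise_std

    def forward(self, x):
        feats = self.trunk(x)
        q_per_head = []
        for head in self.heads:
            q = head(feats)
            # Instead of adding the output of a fixed random prior network,
            # perturb each head's Q-values with Gaussian noise.
            q = q + self.noise_std * torch.randn_like(q)
            q_per_head.append(q)
        return torch.stack(q_per_head, dim=1)  # (batch, heads, actions)
```

In the original bootstrapped DQN with priors, each head's output is combined with a fixed, randomly initialized prior network; here that additive term is swapped for noise drawn anew from a Gaussian distribution, which is the modification the abstract refers to.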
Keywords
Atari, convolutional neural networks (CNNs), deep learning (DL), machine learning, reinforcement learning (RL)