On bonus-based exploration methods

Semantic Scholar (2020)

Abstract
Research on exploration in reinforcement learning, as applied to Atari 2600 game-playing, has emphasized tackling difficult exploration problems such as MONTEZUMA'S REVENGE (Bellemare et al., 2016). Recently, bonus-based exploration methods, which explore by augmenting the environment reward, have reached above-human average performance on such domains. In this paper we reassess popular bonus-based exploration methods within a common evaluation framework. We combine Rainbow (Hessel et al., 2018) with different exploration bonuses and evaluate its performance on MONTEZUMA'S REVENGE, Bellemare et al.'s set of hard exploration games with sparse rewards, and the whole Atari 2600 suite. We find that while exploration bonuses lead to higher scores on MONTEZUMA'S REVENGE, they do not provide meaningful gains over the simpler ε-greedy scheme. In fact, we find that methods that perform best on that game often underperform ε-greedy on easy-exploration Atari 2600 games. We find that our conclusions remain valid even when hyperparameters are tuned for these easy-exploration games. Finally, we find that none of the methods surveyed benefit from additional training samples (1 billion frames, versus Rainbow's 200 million) on Bellemare et al.'s hard exploration games. Our results suggest that recent gains in MONTEZUMA'S REVENGE may be better attributed to architectural changes rather than to better exploration schemes, and that the real pace of progress in exploration research for Atari 2600 games may have been obfuscated by good results on a single domain.
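For concreteness, the following is a minimal tabular sketch (in Python) of the two schemes the abstract contrasts: a count-based bonus added to the environment reward, in the spirit of Bellemare et al. (2016), versus plain ε-greedy action selection. All names here (beta, epsilon, visit_counts, q_values) are illustrative assumptions, not the paper's implementation, which combines such bonuses with Rainbow on high-dimensional Atari observations.

```python
import math
import random
from collections import defaultdict

def count_bonus(visit_counts, state, beta=0.1):
    """Count-based bonus beta / sqrt(N(s) + 1): large for rarely
    visited states, vanishing as the visit count grows."""
    return beta / math.sqrt(visit_counts[state] + 1)

def augmented_reward(env_reward, visit_counts, state, beta=0.1):
    """Bonus-based exploration: the agent is trained on the
    environment reward plus the exploration bonus."""
    return env_reward + count_bonus(visit_counts, state, beta)

def epsilon_greedy(q_values, state, n_actions, epsilon=0.01):
    """The simpler baseline: a uniformly random action with
    probability epsilon, otherwise the greedy action under the
    current Q estimates."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_values[(state, a)])

# Toy usage: defaultdicts stand in for learned Q-values and counts.
visit_counts = defaultdict(int)
q_values = defaultdict(float)
state, n_actions = "s0", 4
visit_counts[state] += 1
print(augmented_reward(0.0, visit_counts, state))  # bonus-shaped reward
print(epsilon_greedy(q_values, state, n_actions))  # baseline action
```

In the paper itself the bonus is derived from learned density or prediction models over Atari frames rather than exact state counts; the sketch only isolates the reward-augmentation idea being evaluated against ε-greedy.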