AdaStop: adaptive statistical testing for sound comparisons of Deep RL agents
CoRR (2023)

Abstract
Recently, the scientific community has questioned the statistical
reproducibility of many empirical results, especially in the field of machine
learning. To solve this reproducibility crisis, we propose a theoretically
sound methodology to compare the overall performance of multiple algorithms
with stochastic returns. We exemplify our methodology in Deep RL. Indeed, the
performance of one execution of a Deep RL algorithm is random. Therefore,
several independent executions are needed to accurately evaluate the overall
performance. When comparing several RL algorithms, a central question is how many
executions must be made, and how we can ensure that the results of such a
comparison are theoretically sound. When several algorithms are compared at
once, the errors of the individual comparisons may accumulate; they must be
controlled with a multiple-testing procedure to preserve low error guarantees.
We introduce
AdaStop, a new statistical test based on multiple group sequential tests. When
comparing algorithms, AdaStop adapts the number of executions to stop as early
as possible while ensuring that we have enough information to distinguish
algorithms that perform better than the others in a statistically significant
way. We prove theoretically and empirically that AdaStop has a low probability
of making a (family-wise) error. Finally, we illustrate the effectiveness of
AdaStop in multiple Deep RL use cases, including toy examples and challenging
MuJoCo environments. AdaStop is the first statistical test tailored to this kind
of comparison: it is both a significant contribution to statistics, and a
major contribution to computational studies performed in reinforcement learning
and in other domains. To summarize our contribution, we introduce AdaStop, a
formally grounded statistical tool to let anyone answer the practical question:
“Is my algorithm the new state-of-the-art?”.
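The adaptive idea described in the abstract, drawing executions in blocks and stopping as soon as a comparison is decided, can be illustrated with a deliberately simplified Python sketch. This is not the AdaStop procedure itself: it uses a plain permutation test on mean returns and a crude Bonferroni-style per-block threshold instead of AdaStop's group sequential boundaries, so it does not carry AdaStop's family-wise guarantees. All function names and parameters are illustrative.

```python
import random

def perm_pvalue(xs, ys, n_perm=2000, seed=0):
    """Two-sided permutation p-value for a difference in mean returns."""
    rng = random.Random(seed)
    observed = abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing keeps p > 0

def adaptive_compare(run_a, run_b, block=10, max_blocks=5, alpha=0.05):
    """Run both agents in blocks of executions; stop early once the
    difference is significant at a crude per-block threshold.
    Simplified sketch, NOT the real AdaStop test."""
    xs, ys = [], []
    for _ in range(max_blocks):
        xs += [run_a() for _ in range(block)]  # one more block of runs
        ys += [run_b() for _ in range(block)]
        if perm_pvalue(xs, ys) < alpha / max_blocks:
            return "different", len(xs)  # decided: stop collecting runs
    return "undecided", len(xs)
```

For two agents whose mean returns are far apart, the comparison is typically decided after the first block, so only `block` executions per agent are needed instead of the full `block * max_blocks` budget, which is the practical benefit of adaptive stopping.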