Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error
CoRR (2024)
Abstract
Establishing robust policies is essential to counter attacks or disturbances
affecting deep reinforcement learning (DRL) agents. Recent studies explore
state-adversarial robustness and suggest the potential lack of an optimal
robust policy (ORP), posing challenges in setting strict robustness
constraints. This work further investigates the ORP. First, we introduce a
consistency assumption of policy (CAP), stating that optimal actions in the
Markov decision process remain consistent under minor perturbations, supported
by empirical and theoretical evidence. Building upon CAP, we prove the
existence of a deterministic and stationary ORP that aligns with the
Bellman optimal policy. Furthermore, we illustrate the necessity of the
L^∞-norm when minimizing the Bellman error to attain the ORP. This finding
clarifies the vulnerability of prior DRL algorithms that target the Bellman
optimal policy with the L^1-norm, and motivates us to train a Consistent
Adversarial Robust Deep Q-Network (CAR-DQN) by minimizing a surrogate of
Bellman Infinity-error. The top-tier performance of CAR-DQN across various
benchmarks validates its practical effectiveness and reinforces the soundness
of our theoretical analysis.
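To make the training objective concrete, below is a minimal sketch of how a surrogate of the Bellman Infinity-error might be approximated: the supremum of the Bellman error over perturbed observations is estimated by sampling states from an l^∞ ball and taking the worst case. This is an illustrative assumption, not the authors' CAR-DQN implementation; the function name bellman_infinity_surrogate and parameters such as epsilon and n_samples are hypothetical.

```python
# Hypothetical sketch of a Bellman Infinity-error surrogate.
# q_net / target_net are standard torch.nn.Module Q-networks; the
# perturbation scheme and all names below are illustrative assumptions,
# not the paper's CAR-DQN implementation.
import torch


def bellman_infinity_surrogate(q_net, target_net, batch, gamma=0.99,
                               epsilon=0.05, n_samples=8):
    """Approximate the max-over-states Bellman error with sampled perturbations.

    batch: (states, actions, rewards, next_states, dones) tensors;
    actions must be int64, dones a float mask in {0, 1}.
    """
    states, actions, rewards, next_states, dones = batch

    # Standard one-step TD targets from the frozen target network.
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q

    # Sample perturbed observations inside an l^inf ball of radius epsilon
    # to approximate the supremum over adversarial states.
    errors = []
    for _ in range(n_samples):
        noise = (torch.rand_like(states) * 2.0 - 1.0) * epsilon
        q_pred = q_net(states + noise).gather(1, actions.unsqueeze(1)).squeeze(1)
        errors.append((q_pred - targets).abs())

    # L^inf flavor of the loss: worst-case error per transition across
    # perturbations, then the maximum across the batch.
    worst_case = torch.stack(errors).max(dim=0).values
    return worst_case.max()
```

The hard max here mirrors the L^∞ objective directly; in practice a smooth relaxation (for example, interpolating the worst-case and mean errors) would likely be needed for stable gradients.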