Fast Two-Time-Scale Stochastic Gradient Method with Applications in Reinforcement Learning
CoRR (2024)
Abstract
Two-time-scale optimization is a framework introduced in Zeng et al. (2024)
that abstracts a range of policy evaluation and policy optimization problems in
reinforcement learning (RL). Akin to bi-level optimization under a particular
type of stochastic oracle, the two-time-scale optimization framework has an
upper level objective whose gradient evaluation depends on the solution of a
lower level problem, which is to find the root of a strongly monotone operator.
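
In symbols, the framework just described can be summarized as the following nested problem. This is a schematic rendering based on the abstract; the notation f, G, and y*(x) is ours, and the paper's exact formulation may differ:

\[
\min_{x}\; f\bigl(x, y^{*}(x)\bigr)
\quad\text{where } y^{*}(x) \text{ solves } G\bigl(x, y^{*}(x)\bigr) = 0,
\]

with G(x, ·) strongly monotone, so the lower-level root y*(x) is unique, and the upper-level gradient can only be estimated through the current lower-level iterate.
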
In this work, we propose a new method for solving two-time-scale optimization
that achieves significantly faster convergence than the prior art. The key
idea of our approach is to leverage an averaging step to improve the estimates
of the operators in both lower and upper levels before using them to update the
decision variables. These additional averaging steps eliminate the direct
coupling between the main variables, enabling the accelerated performance of
our algorithm. We characterize the finite-time convergence rates of the
proposed algorithm under various conditions on the underlying objective
function, including strong convexity, convexity, the Polyak-Łojasiewicz condition,
and general non-convexity. These rates significantly improve over the
best-known complexity of the standard two-time-scale stochastic approximation
algorithm. When applied to RL, we show how the proposed algorithm specializes
to novel online sample-based methods that surpass or match the performance of
the existing state of the art. Finally, we support our theoretical results with
numerical simulations in RL.
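
To make the averaging idea concrete, here is a minimal sketch in Python of how such a scheme could look on a toy strongly convex instance. Everything below, including the quadratic oracles, the step-size and averaging schedules, and the variable names, is our own illustrative assumption, not the paper's algorithm or its tuned rates:

```python
import numpy as np

rng = np.random.default_rng(0)

def fast_two_time_scale(steps=50_000):
    """Toy two-time-scale run: the lower level tracks y*(x) = 0.5 * x,
    and the upper level drives x toward the stationary point x = 0."""
    x, y = 1.0, 0.0          # decision variables (upper, lower)
    f_hat, g_hat = 0.0, 0.0  # running averages of the operator estimates
    for k in range(1, steps + 1):
        alpha = 1.0 / k            # upper-level step size (slow scale)
        beta = 1.0 / k ** (2 / 3)  # lower-level step size (fast scale)
        lam = 1.0 / k ** 0.5       # averaging weight (assumed schedule)

        # Noisy oracle samples: an upper-level gradient estimate and a
        # lower-level operator value, both corrupted by Gaussian noise.
        grad_sample = (x - y) + 0.1 * rng.normal()
        op_sample = (y - 0.5 * x) + 0.1 * rng.normal()

        # Averaging step: smooth the operator estimates *before* they
        # touch the decision variables.
        f_hat = (1 - lam) * f_hat + lam * grad_sample
        g_hat = (1 - lam) * g_hat + lam * op_sample

        # Decision-variable updates use only the averaged estimates,
        # so neither raw sample acts directly on the iterates.
        x -= alpha * f_hat
        y -= beta * g_hat
    return x, y

x_final, y_final = fast_two_time_scale()
print(f"x ~ {x_final:.4f}, y ~ {y_final:.4f}")  # both should be near 0
```

The point of the sketch is the order of operations: the noisy operator samples are first folded into the running averages f_hat and g_hat, and only those smoothed estimates touch the decision variables, which is the decoupling between the main variables that the abstract describes.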