Final Iteration Convergence Bound of Q-Learning: Switching System Approach

IEEE Transactions on Automatic Control (2024)

Abstract
Q-learning is known as one of the fundamental reinforcement learning (RL) algorithms. Its convergence has been the focus of extensive research over the past several decades. Recently, a new finite-time error bound and analysis for Q-learning was introduced using a switching system framework. This approach views the dynamics of Q-learning as a discrete-time stochastic switching system. The prior study established a finite-time error bound on the averaged iterates using Lyapunov functions, offering further insights into Q-learning. While valuable, that analysis focuses on error bounds for the averaged iterate, which carries an inherent disadvantage: it requires extra averaging steps, which can slow the convergence rate. Moreover, the final iterate, being the original form of Q-learning, is more commonly used and is often regarded as the more intuitive and natural form in the majority of iterative algorithms. In this paper, we present a finite-time error bound on the final iterate of Q-learning based on the switching system framework. The proposed error bounds have different features from those in previous works and cover different scenarios. Finally, we expect the proposed results to provide additional insight into Q-learning via connections with discrete-time switching systems, and to potentially offer a new template for the finite-time analysis of more general RL algorithms.
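For context, the switching-system view that the abstract refers to can be sketched as follows; the notation below is an assumption based on that line of prior work, not copied from this paper. Stacking the tabular Q-function into a vector Q_k, let D be the diagonal matrix of state-action sampling probabilities, P the transition matrix, R the expected-reward vector, and \Pi_Q the greedy-policy selection matrix. The Q-learning update then takes a switched affine form:

\[
  Q_{k+1} = Q_k + \alpha\bigl(DR + \gamma D P \Pi_{Q_k} Q_k - D Q_k + w_k\bigr)
          = A_{\sigma(Q_k)} Q_k + b_{\sigma(Q_k)} + \alpha w_k,
\]
\[
  A_{\sigma} := I + \alpha\,(\gamma D P \Pi_{\sigma} - D), \qquad b_{\sigma} := \alpha D R,
\]

where the switching signal \sigma(Q_k) is the greedy policy induced by Q_k and w_k is a martingale-difference noise term. The jumps among the matrices A_\sigma as the greedy policy changes are what make the dynamics a discrete-time stochastic switching system.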
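To make the averaged-versus-final distinction concrete, here is a minimal, self-contained sketch of tabular Q-learning that tracks both quantities. The two-state MDP, uniform behavior policy, constant step size, and Polyak-style running average are all illustrative assumptions, not details taken from the paper:

import numpy as np

# Minimal tabular Q-learning sketch (illustrative MDP and step size).
# It tracks both the final iterate Q_k and the running average
# (1/k) * sum_i Q_i, the two quantities whose error bounds the
# abstract contrasts.

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 2, 2, 0.9, 0.05

# Random MDP: P[s, a] is a distribution over next states, R[s, a] a reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

Q = np.zeros((n_states, n_actions))   # final iterate Q_k
Q_avg = np.zeros_like(Q)              # averaged iterate (1/k) sum_i Q_i

s = 0
for k in range(1, 50_001):
    a = rng.integers(n_actions)                  # uniform exploration policy
    s_next = rng.choice(n_states, p=P[s, a])
    td_target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])     # asynchronous Q-learning update
    Q_avg += (Q - Q_avg) / k                     # incremental running average
    s = s_next

print("final iterate:\n", Q)
print("averaged iterate:\n", Q_avg)

In a run like this, Q is the final iterate whose error this paper bounds directly, while Q_avg corresponds to the averaged iterate analyzed in the prior switching-system work; the extra averaging step is exactly what the abstract identifies as a source of slower convergence.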
Keywords
Reinforcement learning, Q-learning, switching system, convergence, finite-time analysis