Incremental reinforcement learning and optimal output regulation under unmeasurable disturbances

Automatica (2024)

Abstract
In this paper, we propose novel data-driven optimal dynamic controller design frameworks, via both state feedback and output feedback, for solving optimal output regulation problems of linear discrete-time systems subject to unknown dynamics and unmeasurable disturbances using reinforcement learning (RL). Fundamentally different from existing work on optimal output regulation and RL, the proposed procedures determine both the optimal control gain and the optimal dynamic compensator simultaneously, rather than presetting a non-optimal dynamic compensator. Moreover, we present incremental dataset-based RL algorithms that learn the optimal dynamic controllers without requiring measurements of the external disturbance or the exostate during learning, which is of great practical importance. In addition, we show that the proposed incremental dataset-based learning methods are more robust than routine RL algorithms to a class of measurement noises of arbitrary magnitude. Comprehensive simulation results validate the efficacy of the proposed methodologies.
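The abstract does not spell out the algorithm, but the core idea behind incremental datasets can be illustrated in a simple setting: for a linear system driven by an unmeasurable constant disturbance, differencing consecutive data samples cancels the disturbance, so the dynamics can be learned and an optimal gain computed without ever measuring it. The sketch below is a minimal conceptual illustration under that assumption (constant disturbance, full state measurement, known stage costs Q and R); all system matrices and parameters are hypothetical, and this is not the paper's actual method, which handles a general exosystem and output feedback.

```python
import numpy as np

# Hypothetical 2-state, 1-input discrete-time system with an
# unmeasurable constant disturbance d (illustrative values only).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
d = np.array([0.5, -0.3])          # never seen by the learner

rng = np.random.default_rng(0)
N = 200
X = np.zeros((N + 1, 2))
U = rng.standard_normal((N, 1))    # exploratory inputs
for k in range(N):
    X[k + 1] = A @ X[k] + (B @ U[k]).ravel() + d

# Incremental data: consecutive differences cancel d, since
#   x_{k+2} - x_{k+1} = A (x_{k+1} - x_k) + B (u_{k+1} - u_k).
dX = X[1:] - X[:-1]                # state increments
dU = U[1:] - U[:-1]                # input increments
Phi = np.hstack([dX[:-1], dU])     # regressors [dx_k, du_k]
Y = dX[1:]                         # targets dx_{k+1}
Theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
A_hat, B_hat = Theta[:2].T, Theta[2:].T

# Riccati recursion on the learned model yields the LQR gain.
Q, R = np.eye(2), np.eye(1)
P = np.eye(2)
for _ in range(500):
    K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
    P = Q + A_hat.T @ P @ (A_hat - B_hat @ K)

print(np.allclose(A_hat, A, atol=1e-6), np.allclose(B_hat, B, atol=1e-6))
```

Because the disturbance is constant here, the incremental regression recovers (A, B) exactly despite the learner never observing d; the paper's algorithms extend this robustness to dynamic exosystems and to learning the dynamic compensator itself.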
Key words
Reinforcement learning, Optimal control, Output regulation, Incremental dataset