An Observer-Based Reinforcement Learning Solution for Model-Following Problems

62nd IEEE Conference on Decision and Control (CDC), 2023

Abstract
This paper introduces a model-free solution to a multi-objective model-following control problem, using an observer-based adaptive learning approach. The goal is to regulate the model-following error dynamics while simultaneously optimizing the process variables. Integral reinforcement learning is employed to adapt three strategies: observation, closed-loop stabilization, and reference-trajectory tracking. The implementation relies on an approximate projection estimation method under mild conditions on the learning parameters.
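Integral reinforcement learning for continuous-time linear-quadratic problems is the data-driven counterpart of model-based policy iteration (Kleinman's algorithm), which alternates a Lyapunov-equation policy evaluation with a gain update. The sketch below shows that model-based iteration on a hypothetical two-state plant; it is not the paper's algorithm — IRL would replace the Lyapunov solve with least-squares estimates from measured trajectories, and the system matrices here are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical plant (not from the paper): stable 2-state system
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # input cost

K = np.zeros((1, 2))   # initial stabilizing gain (A itself is Hurwitz)
for _ in range(15):
    Ak = A - B @ K
    # Policy evaluation: solve Ak^T P + P Ak = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the algebraic-Riccati-equation solution
P_star = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_star, atol=1e-8))
```

In the integral-RL variant, each Lyapunov step is instead recovered from integrals of the running cost along closed-loop trajectories, so no knowledge of `A` is needed.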