Off-policy Q-learning-based Output Feedback Fault-tolerant Tracking Control of Industrial Processes

2023 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS)

Abstract
In this paper, a data-driven Q-learning output feedback algorithm that requires no system parameter information is proposed to solve the control problem of industrial processes with actuator faults. First, an extended model is obtained by augmenting the system state and output with the tracking error. Second, the Bellman equation and the GARE are derived while constructing the performance index and analyzing its relationship with the value function. Because solving the GARE requires knowledge of the system matrices, a Q-function is introduced, and an algorithm combining off-policy Q-learning with the Kronecker product is used to determine the optimal controller from measurable external signals only. The algorithm is proved to be unbiased. Finally, simulation experiments on an injection molding process verify the effectiveness of the proposed algorithm.
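The following is a minimal sketch of the general idea behind off-policy Q-learning with a Kronecker-product parameterization, in a plain state-feedback LQR setting rather than the paper's extended fault-tolerant output-feedback tracking model. The matrices A and B, the cost weights, the discount factor, and all variable names are illustrative assumptions; A and B are used only to simulate data, and the learning step itself never accesses them.

```python
import numpy as np

# --- Hypothetical problem data (assumptions, not from the paper) ---
# Discrete-time linear system x_{k+1} = A x_k + B u_k.  In the paper the
# state would be the extended (state + tracking-error) vector, and the
# input matrix would absorb the actuator-fault gain.
np.random.seed(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
n, m = 2, 1
Qw, Rw = np.eye(n), np.eye(m)   # quadratic cost weights (assumed)
gamma = 0.95                    # discount factor (assumed)

def quad_features(z):
    """Kronecker-product feature vector kron(z, z) of the stacked [x; u]."""
    return np.kron(z, z)

K = np.zeros((m, n))            # initial admissible target policy
for it in range(20):            # policy iteration loop
    Phi, Y = [], []
    x = np.random.randn(n)
    for k in range(200):
        # Behavior policy = target policy + exploration noise (off-policy data)
        u = K @ x + 0.5 * np.random.randn(m)
        x_next = A @ x + B @ u
        cost = x @ Qw @ x + u @ Rw @ u
        z  = np.concatenate([x, u])
        z1 = np.concatenate([x_next, K @ x_next])   # target action at next step
        # Bellman equation in the Kronecker parameterization:
        #   vec(H)' (kron(z,z) - gamma * kron(z1,z1)) = cost
        Phi.append(quad_features(z) - gamma * quad_features(z1))
        Y.append(cost)
        x = x_next
    # Solve for vec(H) by least squares using measured data only
    h, *_ = np.linalg.lstsq(np.array(Phi), np.array(Y), rcond=None)
    H = h.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)          # symmetrize the Q-function kernel
    Huu, Hux = H[n:, n:], H[n:, :n]
    K_new = -np.linalg.solve(Huu, Hux)  # greedy policy improvement
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new

print("learned gain K =", K)
```

Because data are generated by a behavior policy with exploration noise while the Bellman equation is evaluated at the target policy's next action, the exploration noise introduces no bias into the least-squares estimate of the Q-function kernel, which is the essential advantage of the off-policy formulation over on-policy variants.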
Keywords
off-policy Q-learning, actuator faults, fault-tolerant tracking control, output feedback, industrial processes