A practically implementable reinforcement learning control approach by leveraging offset-free model predictive control

Computers & Chemical Engineering (2024)

Abstract
This work addresses the problem of designing an offset-free, practically implementable reinforcement learning (RL) controller for nonlinear processes. RL-based controllers can update the control policy using data observed online through controller-process interactions, which alleviates the regular model maintenance step that is essential in advanced control techniques such as model predictive control (MPC). However, an RL agent requires random exploration to discover favorable state-action regions before it can converge to the optimal policy, and such exploration is not implementable in practice due to safety concerns and economic objectives. To address this issue, a pre-training strategy is proposed to provide a secure platform for online implementation of the RL controller. To this end, an offset-free MPC optimization problem (representative of industrial MPC) is leveraged to train the RL agent offline. Once the RL agent attains performance comparable to the offset-free MPC, it is deployed online to interact with the actual process. The efficacy of the proposed approach in handling nonlinearity and changes in plant operating conditions (due to unmeasured disturbances) is demonstrated through simulations on a chemical reactor example for a pH neutralization process. The results show that the proposed RL controller significantly improves the oscillatory closed-loop responses that arise when the offset-free MPC is run under plant-model mismatch and unmeasured disturbances.
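The abstract describes a two-stage scheme: first, the RL policy is trained offline against an offset-free MPC so that it reproduces MPC-level behavior, and only then is it allowed to interact with the real plant. The sketch below illustrates one possible structure for such a pipeline, assuming a supervised fit of a neural policy to MPC actions for the offline stage; the names `PolicyNetwork`, `pretrain_offline`, `mpc_solve`, and `plant_step` are hypothetical placeholders, not the authors' implementation or the paper's specific RL formulation.

```python
# Illustrative sketch only: offline pre-training of an RL policy on offset-free
# MPC actions, followed by online deployment on the actual process.
# All names (PolicyNetwork, mpc_solve, plant_step, ...) are assumptions.
import numpy as np
import torch
import torch.nn as nn


class PolicyNetwork(nn.Module):
    """Feedforward policy mapping process states to control inputs."""

    def __init__(self, n_states, n_inputs, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_inputs),
        )

    def forward(self, x):
        return self.net(x)


def pretrain_offline(policy, mpc_solve, state_samples, epochs=200, lr=1e-3):
    """Stage 1: fit the policy to offset-free MPC actions on sampled states,
    so the agent starts online operation near MPC-level performance."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    states = torch.as_tensor(np.asarray(state_samples), dtype=torch.float32)
    # The offset-free MPC acts as the offline "expert" for each sampled state.
    targets = torch.as_tensor(
        np.stack([mpc_solve(s) for s in state_samples]), dtype=torch.float32
    )
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(states), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy


def run_online(policy, plant_step, x0, n_steps=100):
    """Stage 2: deploy the pre-trained policy on the actual process; the
    observed transitions could then feed further policy updates."""
    x = np.asarray(x0, dtype=np.float32)
    trajectory = []
    for _ in range(n_steps):
        with torch.no_grad():
            u = policy(torch.as_tensor(x)).numpy()
        x = np.asarray(plant_step(x, u), dtype=np.float32)  # process response
        trajectory.append((x.copy(), u.copy()))
    return trajectory
```

In this reading, the offline stage plays the role of safe exploration: the policy never acts on the plant until it already mimics the offset-free MPC, after which online interaction can correct for plant-model mismatch and unmeasured disturbances.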
Keywords
Reinforcement learning, Machine learning, Offset-free model predictive control, Process control