A practically implementable reinforcement learning-based process controller design

AIChE Journal (2024)

Abstract
The present article enables reinforcement learning (RL)-based controllers for process control applications. Existing RL-based solutions face significant challenges for online implementation, since training an RL agent (controller) currently requires a practically infeasible number of online interactions between the agent and the environment (process). To address this challenge, we propose an implementable model-free RL method that leverages industrially implemented model predictive control (MPC) calculations (often designed using a simple linear model identified via step tests). In the first step, the MPC calculations are used to pretrain an RL agent that can mimic the MPC performance: specifically, the MPC calculations pretrain the actor, and the MPC objective function pretrains the critic(s). The pretrained RL agent is then employed within a model-free RL framework to control the process in a way that initially imitates MPC behavior (thus not compromising process performance or safety) but continuously learns and improves its performance over the nominal linear MPC. The effectiveness of the proposed approach is illustrated through simulations on a chemical reactor example.
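The pretraining step described in the abstract (imitating MPC calculations before model-free fine-tuning) can be sketched as follows. This is a minimal illustration, not the paper's method: the "MPC" is stood in by a known linear state-feedback law `u = -K x`, and the actor is a linear map fit by least squares, whereas the paper pretrains a neural-network actor (and critics) and then fine-tunes it with twin-delayed deep deterministic policy gradient (TD3). The gain `K_mpc` and all data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the industrial MPC: a fixed linear feedback u = -K x
# (assumption: in the paper, actions come from actual MPC calculations).
K_mpc = np.array([[0.8, 0.3]])  # illustrative gain, 1 input x 2 states

# Step 1: collect state/action pairs by querying the "MPC" offline.
states = rng.normal(size=(200, 2))
actions = states @ (-K_mpc.T)

# Step 2: behavior-clone a linear actor u = x @ W via least squares
# (the paper uses a neural-network actor; linear keeps this runnable).
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The pretrained actor reproduces the MPC actions on fresh states;
# in the proposed framework it would then keep learning online (TD3).
x_new = rng.normal(size=(5, 2))
print("max imitation error:", np.abs(x_new @ W - x_new @ (-K_mpc.T)).max())
```

Starting the model-free RL phase from this MPC-imitating actor is what lets the controller match nominal MPC performance from the first online step instead of exploring from scratch.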
Keywords
artificial neural networks,model predictive control,reinforcement learning,twin-delayed deep deterministic policy gradient