Constrained Reinforcement Learning for Predictive Control in Real-Time Stochastic Dynamic Optimal Power Flow

arXiv (Cornell University), 2023

Abstract
Deep Reinforcement Learning (DRL) has become a popular method for solving control problems in power systems. Conventional DRL encourages the agent to explore various policies encoded in a neural network (NN) with the goal of maximizing the reward function. However, this approach can lead to infeasible solutions that violate physical constraints such as power flow equations, voltage limits, and dynamic constraints. Satisfying these constraints is crucial in power systems, which are safety-critical infrastructure. Existing DRL algorithms address this issue by projecting the actions onto the feasible set, which can result in sub-optimal solutions. This paper presents a novel primal-dual approach for learning optimal constrained DRL policies for dynamic optimal power flow problems, with the aim of controlling power generation and battery outputs. We also prove the convergence of the critic and actor networks. Our case studies on IEEE standard systems demonstrate the superiority of the proposed approach in dynamically adapting to the environment while maintaining safety constraints.
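To make the primal-dual idea mentioned in the abstract concrete, the sketch below shows Lagrangian relaxation on a toy constrained problem: maximize a reward r(a) subject to a constraint c(a) ≤ 0, alternating a primal gradient step on the decision variable with a dual ascent step on the multiplier. This is only an illustrative sketch of the general technique, not the paper's actual actor-critic algorithm; the objective, constraint, and step sizes are invented for the example.

```python
# Toy primal-dual (Lagrangian) iteration: maximize r(a) = -(a - 2)^2
# subject to c(a) = a - 1 <= 0. The Lagrangian is
#   L(a, lam) = r(a) - lam * c(a),
# with a gradient-ascent step on a (primal) and a projected
# gradient-ascent step on lam >= 0 (dual). The KKT point is
# a* = 1, lam* = 2. Hypothetical numbers chosen for illustration.

a = 0.0     # primal variable (the "action")
lam = 0.0   # dual variable (constraint multiplier)
lr_primal = 0.05
lr_dual = 0.05

for _ in range(2000):
    # dL/da = r'(a) - lam * c'(a) = -2*(a - 2) - lam
    a += lr_primal * (-2.0 * (a - 2.0) - lam)
    # Dual ascent on the constraint violation, projected onto lam >= 0.
    lam = max(0.0, lam + lr_dual * (a - 1.0))

print(a, lam)  # should approach the KKT point (1, 2)
```

The same alternating structure underlies primal-dual constrained RL: the actor update plays the role of the primal step on the policy, while the multiplier update penalizes expected constraint violation.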
Keywords
predictive control,reinforcement learning,power,real-time