Uncertainty-Aware Transient Stability-Constrained Preventive Redispatch: A Distributional Reinforcement Learning Approach
CoRR (2024)
Abstract
Transient stability-constrained preventive redispatch plays a crucial role in
ensuring power system security and stability. Since redispatch strategies must
simultaneously satisfy complex transient constraints and economic objectives,
model-based formulation and optimization become extremely challenging. In
addition, the increasing uncertainty and variability introduced by renewable
sources are shifting system stability assessment from deterministic to
probabilistic, which further exacerbates the complexity. In this paper, a Graph
neural network guided Distributional Deep Reinforcement Learning (GD2RL) method
is proposed, for the first time, to solve the uncertainty-aware transient
stability-constrained preventive redispatch problem. First, a graph neural
network-based transient simulator is trained by supervised learning to
efficiently generate post-contingency rotor angle curves with the steady-state
and contingency as inputs, which serves as a feature extractor for operating
states and a surrogate time-domain simulator during the environment interaction
for reinforcement learning. Distributional deep reinforcement learning with
explicit uncertainty distribution of system operational conditions is then
applied to generate the redispatch strategy, balancing user-specified
probabilistic stability performance against economic preferences. The full
distribution of the post-control transient stability index is directly provided
as the output. Case studies on the modified New England 39-bus system validate
the proposed method.
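The distributional-RL idea at the core of the abstract, representing the full return distribution (here standing in for a post-control transient stability index) rather than its expectation, can be illustrated with a minimal quantile-regression sketch. This is not the paper's GD2RL implementation; the Gaussian target distribution, quantile count, and learning rate are illustrative assumptions.

```python
import numpy as np

def quantile_regression_step(quantiles, target_samples, lr=0.05):
    """One stochastic quantile-regression update.

    quantiles: current estimates of the tau_i-quantiles (shape [N])
    target_samples: sampled target returns (shape [M])
    """
    n = len(quantiles)
    taus = (np.arange(n) + 0.5) / n              # midpoint quantile levels
    new_q = quantiles.copy()
    for j, q in enumerate(quantiles):
        # Subgradient of the pinball (quantile) loss, averaged over samples:
        # d/dq rho_tau(z - q) = 1{z < q} - tau.
        indicator = (target_samples < q).astype(float)
        grad = np.mean(indicator - taus[j])
        new_q[j] = q - lr * grad
    return new_q

def cvar(quantiles, alpha=0.1):
    """Mean of the worst alpha-fraction of quantiles (lower-tail CVaR)."""
    k = max(1, int(np.ceil(alpha * len(quantiles))))
    return float(np.mean(np.sort(quantiles)[:k]))

rng = np.random.default_rng(0)
q = np.zeros(32)                                 # initial quantile estimates
for _ in range(2000):
    # Stand-in samples of a stability-index return; illustrative only.
    samples = rng.normal(1.0, 0.5, size=16)
    q = quantile_regression_step(q, samples)
# q now approximates the quantile function of N(1.0, 0.5); a risk-sensitive
# controller could act on cvar(q) instead of the mean.
```

Reading off a tail statistic such as `cvar(q)` rather than the mean is one way a user-specified probabilistic stability preference, as described in the abstract, can be traded off against expected cost.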