Energy management of a microgrid considering nonlinear losses in batteries through Deep Reinforcement Learning

Applied Energy (2024)

Abstract
The massive deployment of microgrids could play a significant role in achieving decarbonization of the electric sector amid the ongoing energy transition. The effective operation of these microgrids requires an Energy Management System (EMS), which establishes control set-points for all dispatchable components. EMSs can be formulated as classical optimization problems or as Partially Observable Markov Decision Processes (POMDPs). Deep Reinforcement Learning (DRL) algorithms, which have gained popularity in recent years, have been employed to solve the latter. Since DRL methods promise to deal effectively with nonlinear dynamics, this paper examines the performance of the Twin-Delayed Deep Deterministic Policy Gradient (TD3) algorithm – a state-of-the-art DRL method – for the EMS of a microgrid that includes nonlinear battery losses. Furthermore, the classical EMS-microgrid interaction is improved by refining the behavior of the underlying control system to obtain reliable results. The performance of this novel approach has been tested on two distinct microgrids – a residential one and a larger-scale grid – with satisfactory outcomes that go beyond reducing operational costs. Findings demonstrate the intrinsic potential of DRL-based algorithms for enhancing energy management and driving more efficient power systems.
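
To make the formulation concrete, the sketch below illustrates how a microgrid EMS with nonlinear battery losses could be wrapped as a reinforcement-learning environment and trained with an off-the-shelf TD3 implementation. This is not the authors' model: the quadratic loss term, load and PV profiles, battery capacity, and cost coefficients are assumptions chosen only to show the structure of the problem, and the training call assumes stable-baselines3 is available.

```python
# Minimal illustrative sketch (not the paper's implementation) of a microgrid
# EMS framed as an RL environment with a nonlinear battery loss term.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class MicrogridEnv(gym.Env):
    """Toy microgrid: one battery, an assumed load, PV generation, diesel backup."""

    def __init__(self, horizon=24):
        super().__init__()
        self.horizon = horizon
        # Observation: [state of charge (kWh), load (kW), PV (kW), hour of day]
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(4,), dtype=np.float32)
        # Action: battery power set-point in [-1, 1], scaled to +/- 5 kW (>0 = discharge)
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)

    def _battery_losses(self, p_batt):
        # Assumed nonlinear loss model: losses grow with the square of the
        # charge/discharge power (a common simplification, not the paper's model).
        return 0.02 * p_batt ** 2

    def _obs(self):
        load = 3.0 + np.sin(2 * np.pi * self.t / 24)              # assumed load profile (kW)
        pv = max(0.0, 4.0 * np.sin(np.pi * (self.t - 6) / 12))    # assumed PV profile (kW)
        return np.array([self.soc, load, pv, self.t], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = 10.0  # kWh, assumed initial state of charge
        return self._obs(), {}

    def step(self, action):
        soc, load, pv, _ = self._obs()
        p_batt = 5.0 * float(action[0])                      # kW delivered by the battery
        loss = self._battery_losses(p_batt)
        # Energy drawn from (or stored in) the battery over a 1 h step, 20 kWh capacity.
        self.soc = float(np.clip(soc - (p_batt + loss), 0.0, 20.0))
        diesel = max(0.0, load - pv - p_batt)                # backup generator covers the deficit
        reward = -(0.5 * diesel + 0.1 * loss)                # penalize fuel use and battery losses
        self.t += 1
        terminated = self.t >= self.horizon
        return self._obs(), reward, terminated, False, {}


if __name__ == "__main__":
    # Training with the TD3 agent from stable-baselines3 (assumed installed).
    from stable_baselines3 import TD3

    model = TD3("MlpPolicy", MicrogridEnv(), verbose=1)
    model.learn(total_timesteps=10_000)
```

In this toy setting the agent only observes the current state of charge, load, PV output, and hour, so part of the system dynamics stays hidden, which is what motivates the POMDP framing used in the paper.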
Keywords
Deep Reinforcement Learning, Energy management system, Energy savings, Isolated microgrid, Nonlinear battery model