
A Review of Safe Reinforcement Learning Methods for Modern Power Systems

Authors:
Su, Tong
Wu, Tong
Zhao, Junbo
Scaglione, Anna
Xie, Le
Publication Year:
2024

Abstract

Due to the availability of more comprehensive measurement data in modern power systems, there has been significant interest in developing and applying reinforcement learning (RL) methods for operation and control. Conventional RL training relies on trial-and-error interaction, guided by reward feedback, with either a model-based simulated environment or a data-driven, model-free simulation environment. These methods often explore actions in unsafe operating regions during training and, once the RL policies are deployed in real power systems, may execute unsafe actions. A large body of literature has proposed safe RL strategies to prevent the training and execution of unsafe policies. In power systems, safe RL refers to a class of RL algorithms that ensure or promote the safety of power system operations by executing safe actions while optimizing the objective function. Although different papers handle the safety constraints differently, the overarching goal of safe RL methods is to train policies that maximize rewards while satisfying safety constraints. This paper provides a comprehensive review of safe RL techniques and their applications to power system operation and control, including optimal power generation dispatch, voltage control, stability control, electric vehicle (EV) charging control, buildings' energy management, electricity markets, system restoration, and unit commitment and reserve scheduling. Additionally, the paper discusses benchmarks, challenges, and future directions for safe RL research in power systems.
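To make the "maximize reward subject to safety constraints" objective concrete, the sketch below illustrates one common safe RL formulation, a constrained MDP solved by Lagrangian relaxation, on a toy two-action problem. It is not the paper's method; the environment, rewards, costs, cost limit, and learning rates are all assumptions chosen purely for illustration.

import numpy as np

# Illustrative sketch only: Lagrangian-relaxation view of safe RL, where the
# agent maximizes reward subject to an expected-cost limit. The toy
# bandit-style "environment" and all hyperparameters below are assumptions.
rng = np.random.default_rng(0)

# Two actions: action 0 gives high reward but incurs a safety cost (e.g., a
# constraint violation); action 1 gives lower reward but is safe.
REWARDS = np.array([1.0, 0.6])
COSTS = np.array([1.0, 0.0])
COST_LIMIT = 0.1          # allowed expected cost d
theta = np.zeros(2)       # softmax policy parameters
lam = 0.0                 # Lagrange multiplier
lr_theta, lr_lam = 0.05, 0.05

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r, c = REWARDS[a], COSTS[a]

    # Policy gradient on the Lagrangian: reward minus lambda-weighted cost.
    grad_logp = -probs
    grad_logp[a] += 1.0
    theta += lr_theta * (r - lam * c) * grad_logp

    # Dual ascent: raise lambda whenever the observed cost exceeds the limit.
    lam = max(0.0, lam + lr_lam * (c - COST_LIMIT))

probs = softmax(theta)
print(f"P(unsafe action) = {probs[0]:.3f}, lambda = {lam:.2f}")
# The multiplier grows until the reward advantage of the unsafe action is
# outweighed, shifting probability mass to the safe action so that the
# expected cost settles near COST_LIMIT.

Many of the safe RL techniques surveyed in the paper replace this tabular update with deep policy optimization and impose the constraints differently (e.g., via safety layers, shields, or barrier functions), but the reward-versus-constraint trade-off they manage is the same one shown here.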

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.00304
Document Type:
Working Paper