2000
DOI: 10.1007/3-540-45372-5_9

Application of Reinforcement Learning to Electrical Power System Closed-Loop Emergency Control

Abstract: This paper investigates the use of reinforcement learning in electric power system emergency control. The approach consists of using numerical simulations together with on-policy Monte Carlo control to determine a discrete switching control law to trip generators so as to avoid loss of synchronism. The proposed approach is tested on a model of a real large scale power system and results are compared with a quasioptimal control law designed by a brute force approach for this system.
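The on-policy Monte Carlo control named in the abstract can be sketched on a toy episodic task. Everything below — the chain MDP, the "hold"/"trip" action labels, the reward values, and the function names — is an illustrative assumption standing in for the paper's power-system simulator, not its actual model:

```python
import random
from collections import defaultdict

# Toy episodic MDP standing in for the simulated power system:
# states 0..3, actions 0 ("hold") and 1 ("trip"); state 3 is terminal.
N_STATES, ACTIONS, GOAL = 4, (0, 1), 3

def step(state, action):
    # Action 1 moves toward the terminal state; action 0 stalls at a cost.
    nxt = min(state + 1, GOAL) if action == 1 else state
    reward = 0.0 if nxt == GOAL else -1.0
    return nxt, reward

def mc_control(episodes=500, eps=0.1, gamma=1.0, seed=0):
    """On-policy (every-visit) Monte Carlo control with an eps-soft policy."""
    rng = random.Random(seed)
    Q = defaultdict(float)   # action-value estimates, keyed by (state, action)
    n = defaultdict(int)     # visit counts for incremental averaging
    for _ in range(episodes):
        # Generate one episode following the current eps-greedy policy.
        state, traj = 0, []
        while state != GOAL:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(state, x)])
            nxt, r = step(state, a)
            traj.append((state, a, r))
            state = nxt
        # Walk the episode backward, averaging returns into Q.
        G = 0.0
        for s, a, r in reversed(traj):
            G = gamma * G + r
            n[(s, a)] += 1
            Q[(s, a)] += (G - Q[(s, a)]) / n[(s, a)]
    return Q
```

After training, the greedy policy extracted from `Q` prefers the "trip" action in every non-terminal state, which is the discrete switching law for this toy problem; the paper determines an analogous law from numerical power-system simulations.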

Cited by 8 publications (6 citation statements)
References 4 publications
“…The earliest research reports date back to 1999 and 2000 [19,22,23]. In [19], the use of RL in electric power system closed-loop emergency control was investigated.…”
Section: Results Under Cyclic Power Demand Variations
confidence: 99%
“…RELATED WORK The application of RL algorithms to power system stability control is still in its infancy. Considerable research efforts have been made at the University of Liège [16,[19][20][21], and this paper is a result of those efforts.…”
Section: Results Under Cyclic Power Demand Variations
confidence: 99%
“…While the learning stage refers to the usual RL implementation, the execution stage deploys the knowledge acquired from the learning stage for decision making. As a TSA crisis can be considered a wide-area control (WAC) systems crisis, Druet et al. [112] investigated the deployment of RL using Monte Carlo control to define the switching control law for tripping generators in order to avoid loss of synchronism. However, due to scalability challenges, traditional RL algorithms struggle, especially with regard to large-scale power systems.…”
Section: Review of RL and DRL Approaches to TSA
confidence: 99%
“…RL methods can solve sequential decision-making problems in real time [10]. The last two decades have seen increasing efforts to apply conventional RL methods, such as Q-learning and fitted Q-iteration [10], to various decision-making and control problems in power systems; these range from demand response [11], energy management, and automatic generation control to transient stability and emergency control [9], [12], [13]. Due to scalability issues, applications of conventional RL methods mainly focus on problems with low-dimensional state and action spaces.…”
Section: Introduction
confidence: 99%
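The scalability limit mentioned in the last statement is easy to see in tabular Q-learning, the conventional method the citing paper names: the value table holds one entry per (state, action) pair, so its size grows with the discretized state space. The chain MDP and all parameter values below are illustrative assumptions, not taken from any of the cited works:

```python
import random
from collections import defaultdict

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP with eps-greedy exploration."""
    rng = random.Random(seed)
    actions = (0, 1)          # 0: stay (incurs cost), 1: advance toward terminal
    goal = n_states - 1
    Q = defaultdict(float)    # one table entry per (state, action) pair
    for _ in range(episodes):
        s = 0
        while s != goal:
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            nxt = min(s + 1, goal) if a == 1 else s
            r = 0.0 if nxt == goal else -1.0
            # One-step temporal-difference update toward the greedy successor value.
            best_next = max(Q[(nxt, x)] for x in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
    return Q
```

Because `Q` must enumerate every discretized state, a realistic power-system state (hundreds of continuous bus voltages and angles) makes the table astronomically large, which is why the cited works restrict such methods to low-dimensional problems or turn to function approximation.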