2004
DOI: 10.1109/tpwrs.2003.821457
Power Systems Stability Control: Reinforcement Learning Framework

Abstract: In this paper we explore how a computational approach to learning from interactions, called Reinforcement Learning (RL), can be applied to control power systems. We describe some challenges in power system control and discuss how some of those challenges could be met by using these RL methods. The difficulties associated with their application to control power systems are described and discussed, as well as strategies that can be adopted to overcome them. Two reinforcement learning modes are considered…


Cited by 185 publications (110 citation statements)
References 11 publications
“…Over the last two decades, most of the research in this context has focused on the use of parametric function approximators, representing either some (state-action) value functions or parameterized policies, together with some stochastic gradient descent algorithms [7]- [10]. Even if some successes have been reported (e.g., [11]- [14]), these techniques have not yet moved from the academic to the real world as successfully as MPC techniques, which have already been largely adopted in practice [15].…”
(mentioning; confidence: 99%)
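The excerpt above refers to parametric function approximators representing state-action value functions, trained with stochastic gradient descent. As a purely illustrative sketch (not taken from the cited paper; the one-hot features, toy two-state MDP, learning rate, and discount factor are all assumptions), semi-gradient Q-learning with a linear approximator can be written as:

```python
import numpy as np

def features(state, action, n_states, n_actions):
    """One-hot (state, action) feature vector -- the simplest linear parameterization."""
    phi = np.zeros(n_states * n_actions)
    phi[state * n_actions + action] = 1.0
    return phi

def sgd_q_update(w, s, a, r, s_next, n_states, n_actions, alpha=0.1, gamma=0.95):
    """One semi-gradient Q-learning step on the linear weight vector w."""
    phi = features(s, a, n_states, n_actions)
    q_sa = w @ phi
    # Bootstrap from the greedy action value in the next state.
    q_next = max(w @ features(s_next, b, n_states, n_actions) for b in range(n_actions))
    td_error = r + gamma * q_next - q_sa
    # Stochastic gradient step: for linear features the gradient of Q w.r.t. w is phi.
    return w + alpha * td_error * phi

# Toy 2-state, 2-action example: only action 1 in state 0 yields reward 1.
w = np.zeros(4)
rng = np.random.default_rng(0)
for _ in range(500):
    s = int(rng.integers(2))
    a = int(rng.integers(2))
    r = 1.0 if (s == 0 and a == 1) else 0.0
    s_next = int(rng.integers(2))
    w = sgd_q_update(w, s, a, r, s_next, 2, 2)
```

With one-hot features this reduces to tabular Q-learning; the same update applies unchanged to richer feature vectors, which is what makes the parametric formulation attractive for large power-system state spaces.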
“…Power system components considered include: dynamic brake Ernst et al (2004); Glavic (2005), thyristor controlled series capacitor Ernst et al (2004, 2009), quadrature booster Li and Wu (1999), synchronous generators (all AGC related references), individual or aggregated loads Vandael et al (2015); Ruelens et al (2016), etc. If used as a multi-agent system, then additional state variables must be introduced to ensure convergence of these essentially distributed computation schemes, and an adapted variant of standard RL methods is often used (for example correlated equilibrium Q(λ), Yu et al (2012a)).…”
Section: Past and Recent Considerations of RL for Electric Power Systems (mentioning; confidence: 99%)
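The excerpt mentions adapted RL variants such as correlated equilibrium Q(λ). As a hedged, single-agent illustration of the Q(λ) building block only (Watkins's Q(λ) with eligibility traces; the two-state chain, step size, and trace-decay parameter are assumptions, and the multi-agent correlated-equilibrium extension of Yu et al (2012a) is not reproduced here):

```python
import numpy as np

def q_lambda_episode(Q, transitions, alpha=0.1, gamma=0.95, lam=0.8):
    """Watkins's Q(lambda): each TD error is propagated backwards along an
    eligibility trace; the trace is cut whenever a non-greedy action is taken."""
    E = np.zeros_like(Q)
    for s, a, r, s_next in transitions:
        greedy = np.argmax(Q[s])
        td_error = r + gamma * np.max(Q[s_next]) - Q[s, a]
        E[s, a] += 1.0                 # accumulate trace on the visited pair
        Q += alpha * td_error * E      # one TD error updates all traced pairs
        # Decay the trace if the action was greedy; cut it after exploration.
        E *= gamma * lam if a == greedy else 0.0
    return Q

# Toy two-state chain: reward 1 only for action 1 in state 1.
Q = np.zeros((2, 2))
episode = [(0, 1, 0.0, 1), (1, 1, 1.0, 0)]
for _ in range(200):
    Q = q_lambda_episode(Q, episode)
```

The trace mechanism is what lets a single reward update every state-action pair along the path that led to it, which speeds credit assignment in the long control horizons typical of power-system stability problems.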
“…In turn, the customers in DECENT carry out distributed negotiations through their agents, and neither look-ahead nor auctioning is needed. Grid stability is addressed in Ernst et al, 2002, Ernst et al, 2004, Hadidi et al, 2009, Pipattanasomporn et al, 2009, and Vlachogiannis et al, 2004 in terms of various learning strategies where a central agency would be enabled to ensure (optimal) stability. This theme has been an open issue in practice so far, and the proposed solutions are certainly not scalable.…”
Section: Previous and Related (mentioning; confidence: 99%)