2004
DOI: 10.1109/tpwrs.2004.831259

Reinforcement Learning for Reactive Power Control

Cited by 117 publications (54 citation statements)
References 15 publications
“…In fact, in order to highlight the importance of secure and stable operation in the power grid, it is also necessary to consider reactive power optimization with voltage inequality constraints [12,20]. Here, a new voltage performance index V, measuring the voltage deviation of the system, is introduced into the OPF objective function as follows:…”
Section: OPF Mathematical Model
confidence: 99%
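The index itself is truncated in the excerpt above. As an illustrative sketch only, and not the cited paper's actual formula, a voltage-deviation penalty folded into an OPF objective commonly takes a form such as:

```latex
% Illustrative only: a generic voltage-deviation index, not the cited paper's formula.
% N_B, V_i^ref, and lambda_V are assumed names for this sketch.
V = \sum_{i \in \mathcal{N}_B}
    \left( \frac{V_i - V_i^{\mathrm{ref}}}{V_i^{\max} - V_i^{\min}} \right)^{2},
\qquad
\min \; f = f_{\mathrm{cost}} + \lambda_V \, V
```

Here $\mathcal{N}_B$ would denote the set of monitored buses, $V_i^{\mathrm{ref}}$ the target voltage at bus $i$, and $\lambda_V$ a weight trading off the voltage term against the base cost objective; all of these symbols are assumptions for illustration.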
“…In recent years, Reinforcement Learning techniques have been proposed for reactive power control [12], economic dispatch [13], power market [14], etc. With the rapid development of multi-agent system (MAS) technology [15,16], Distributed Reinforcement Learning (DRL) [17], as an extension of Reinforcement Learning, plays an important role as the core enabling technology to achieve the MAS's goal.…”
Section: Introduction
confidence: 99%
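To make the RL-for-reactive-power idea mentioned in this statement concrete, here is a minimal toy sketch of tabular Q-learning on a discretized control problem. The state/action/reward design (voltage bands, capacitor-bank steps, deviation penalty) is an illustrative assumption, not the model used in any of the cited papers:

```python
import random

# Toy sketch: tabular Q-learning for discretized reactive power control.
# States model bus-voltage bands; actions shift a capacitor-bank setting
# down / hold / up. All names and the reward shape are assumptions.

N_STATES = 5          # voltage bands from low to high
N_ACTIONS = 3         # 0 = step down, 1 = hold, 2 = step up
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
TARGET = 2            # band closest to 1.0 p.u.

def step(state, action):
    """Toy transition: the action shifts the voltage band by -1/0/+1;
    the reward is the negative voltage deviation from the target band."""
    nxt = max(0, min(N_STATES - 1, state + action - 1))
    reward = -abs(nxt - TARGET)
    return nxt, reward

def train(episodes=2000, horizon=20, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(horizon):
            # epsilon-greedy exploration
            if rng.random() < EPS:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: q[s][x])
            s2, r = step(s, a)
            # standard Q-learning update
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)  # greedy action per voltage band
```

Note that, as the last quoted statement below also observes, only the scalar reward is needed to drive learning; no analytic model of the grid enters the update rule.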
“…In turn, the customers in DECENT carry out distributed negotiations through their agents, and neither look-ahead nor auctioning is needed. Grid stability is addressed in Ernst et al., 2002, Ernst et al., 2004, Hadidi et al., 2009, Pipattanasomporn et al., 2009 and Vlachogiannis et al., 2004 in terms of various learning strategies where a central agency would be enabled to ensure (optimal) stability. This theme has been an open issue in practice so far, and the proposed solutions are certainly not scalable.…”
Section: Previous and Related
confidence: 99%
“…It only needs a reward function to evaluate the quality of a solution instead of complicated mathematical operations. Finally, it has the ability to escape local minima because it performs stochastic optimization [29][30][31][32].…”
Section: Introduction
confidence: 99%