2022
DOI: 10.1016/j.apenergy.2022.119163
Lightweight actor-critic generative adversarial networks for real-time smart generation control of microgrids

Cited by 20 publications (8 citation statements) | References 52 publications

“…Although DQN can handle optimal scheduling problems with high-dimensional observation spaces, DQN-based methods can only deal with discrete action spaces. To solve dispatch problems with continuous-valued state and action variables, many scholars have begun to focus on DRL algorithms based on policy gradients, including deep deterministic policy gradient (DDPG) [50], actor-critic (AC) [47] and its variants, such as advantage actor-critic (A2C) [48], asynchronous advantage actor-critic (A3C) [49], and soft actor-critic (SAC) [56]. Nevertheless, large parameter updates should be avoided to improve the stability of DRL-agent training.…”
Section: Deep Reinforcement Learning Methods (mentioning)
confidence: 99%
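The limitation the quoted passage draws on is architectural: a DQN outputs one Q-value per discrete action and acts by argmax, whereas a policy-gradient actor such as DDPG's emits a continuous action directly. A minimal PyTorch sketch of the contrast (layer sizes, dimensions, and the generation-control framing are illustrative assumptions, not taken from the cited papers):

```python
# Minimal sketch (not from the cited papers): a DQN head scores a fixed
# set of discrete actions, while a DDPG-style actor emits a continuous
# action directly. All sizes below are illustrative.
import torch
import torch.nn as nn

class DQNHead(nn.Module):
    """Q-network: one Q-value per discrete action; act by argmax."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),          # one output per discrete action
        )

    def act(self, state: torch.Tensor) -> int:
        return int(self.net(state).argmax().item())

class DDPGActor(nn.Module):
    """Deterministic policy: maps a state to a bounded continuous action."""
    def __init__(self, state_dim: int, action_dim: int, max_action: float):
        super().__init__()
        self.max_action = max_action
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # squash to [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.max_action * self.net(state)   # scale to action bounds

# E.g., a hypothetical generation-control agent with an 8-dim state:
state = torch.randn(8)
print(DQNHead(8, 5).act(state))       # index of one of 5 discrete setpoints
print(DDPGActor(8, 1, 1.0)(state))    # a real-valued power adjustment
```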
“…It is a technique in which a program learns from data, working similarly to the way the human brain works. The common neural networks are feedforward neural networks (FFNNs) [17], convolutional neural networks (CNNs) [18], recurrent neural networks (RNNs) [19], deep belief networks (DBNs) [20], generative adversarial networks (GANs) [21], and spiking neural networks (SNNs) [22]. This study uses a back propagation (BP) neural network, which is one of the FFNNs.…”
Section: Composition Of Neural Network Prediction Model (mentioning)
confidence: 99%
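For concreteness, a back propagation FFNN of the kind the quote names fits in a few lines: one hidden layer whose weights are updated by propagating the squared-error gradient backward. This is a minimal sketch on a toy regression target, not the cited study's prediction model; all sizes and hyperparameters are assumed.

```python
# Minimal sketch (not the cited paper's model): a one-hidden-layer
# feedforward network trained by back propagation on a toy target.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1                                   # illustrative learning rate

x = rng.uniform(-1, 1, size=(64, 1))       # toy inputs
y = np.sin(3 * x)                          # toy regression target

for _ in range(2000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)               # hidden activations
    y_hat = h @ W2 + b2                    # linear output layer
    # Backward pass: propagate the squared-error gradient
    g_out = 2 * (y_hat - y) / len(x)       # dLoss/dy_hat
    g_W2 = h.T @ g_out
    g_h = g_out @ W2.T * (1 - h**2)        # tanh'(z) = 1 - tanh(z)^2
    g_W1 = x.T @ g_h
    # Gradient-descent updates
    W2 -= lr * g_W2; b2 -= lr * g_out.sum(0)
    W1 -= lr * g_W1; b1 -= lr * g_h.sum(0)

h = np.tanh(x @ W1 + b1)
print(float(np.mean((h @ W2 + b2 - y) ** 2)))   # final training MSE
```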
“…Currently, a data-driven approach cannot avoid involving deep learning methods [9,10]. Deep learning can be classified into convolutional neural networks [8], deep neural networks [11], deep reinforcement learning [12], and deep forest algorithms [13]. Deep learning can in turn be classified into classification algorithms, prediction algorithms, and control algorithms [14].…”
Section: Introduction (mentioning)
confidence: 99%
“…Recently, numerous deep reinforcement learning techniques have been combined to achieve better control performance in more complex scenarios. For example, traditional controllers + deep reinforcement learning [4], modal decomposition + generative adversarial networks [12], and twin-delayed DDPG (TD3) + DDPG [20] have been combined to address the frequency control problems of novel power systems; Markov chains and isoprobabilistic transformation are combined for capacitor planning [21]. Overall, the primary contributions of this work are summarized as follows:…”
Section: Introduction (mentioning)
confidence: 99%
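One of the listed combinations, "traditional controllers + deep reinforcement learning", typically keeps a conventional controller as the baseline and lets a learned policy add a bounded correction on top. A minimal sketch of that hybrid pattern follows; the PI gains, state dimension, and residual bound are illustrative assumptions, not any cited paper's design.

```python
# Minimal sketch (not any cited paper's scheme): a PI frequency controller
# augmented with a bounded residual from a learned policy.
import torch
import torch.nn as nn

# Hypothetical residual actor: any module mapping state -> action in [-1, 1].
actor = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh())

kp, ki, scale = 0.5, 0.1, 0.2      # illustrative PI gains and residual bound
integral = 0.0

def control(freq_error: float, state: torch.Tensor) -> float:
    """PI baseline plus a bounded learned correction."""
    global integral
    integral += freq_error
    baseline = kp * freq_error + ki * integral
    with torch.no_grad():                     # the actor only corrects here
        residual = float(actor(state))        # residual in [-1, 1] from tanh
    return baseline + scale * residual

print(control(0.05, torch.randn(4)))
```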