2019
DOI: 10.1109/jiot.2019.2941498
Reinforcement Learning-Based Microgrid Energy Trading With a Reduced Power Plant Schedule

Cited by 91 publications (37 citation statements)
References 28 publications
“…The agent manages the activities of the storage devices with the goal of maximizing demand-side cost savings. Other research in this direction is presented in [79], [80]. From the perspective of the efficiency of grid operations, however, Ren et al. focused on the use of RL for load balancing in smart grids [81].…”
Section: E. Reinforcement Learning Applications in CPS
confidence: 99%
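As context for the statement above, a minimal sketch of a per-step "demand-side cost saving" reward for a storage-dispatch agent. All names, and the no-export assumption, are illustrative and not drawn from the cited papers.

# Reward = grid bill without storage minus bill after the agent's action.
def cost_saving_reward(load_kw, tariff, battery_power_kw, dt_h=1.0):
    """battery_power_kw > 0 means discharging (serving load),
    < 0 means charging (adding to load)."""
    baseline_cost = tariff * load_kw * dt_h
    net_load_kw = max(load_kw - battery_power_kw, 0.0)  # no export assumed
    actual_cost = tariff * net_load_kw * dt_h
    return baseline_cost - actual_cost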
“…For battery behind-the-meter systems, the essential variables for solving the problem include the DG power output, the load profile, the measured battery storage system (BSS) instantaneous state of charge (SoC), and the forecasted day-ahead grid tariff profile. The costs considered include the grid power purchase cost, the cost of degradation of the BSS as defined in [28], [29], [39], and the cost of power purchased from auxiliary sources such as vehicle-to-microgrid (V2M) [6], [40], [41]. In some cases the grid tariff is constant, as in [41]; in others a stochastic tariff is considered, as in [42], [43].…”
Section: Mathematical Formulations
confidence: 99%
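For context, a sketch of the per-interval operating cost this statement describes, combining grid purchase cost, a BSS degradation term, and auxiliary V2M purchase cost. The symbols and the simple linear degradation model are illustrative assumptions, not the formulation of [28], [29], [39].

def interval_cost(grid_kw, tariff,            # grid purchase and its price
                  bss_throughput_kwh, c_deg,  # BSS energy cycled, $/kWh wear
                  v2m_kw, v2m_price,          # auxiliary V2M purchase
                  dt_h=1.0):
    grid_cost = tariff * grid_kw * dt_h
    degradation_cost = c_deg * bss_throughput_kwh
    v2m_cost = v2m_price * v2m_kw * dt_h
    return grid_cost + degradation_cost + v2m_cost

A stochastic tariff, as in [42], [43], would simply supply a different tariff value per interval; the cost structure itself is unchanged.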
“…The authors used convolutional neural networks to learn a general policy for scheduling the storage in an unpredictable demand and generation environment. Lu et al. [29] used the DQN strategy for energy trading between a microgrid and a power plant and achieved a 22.3% improvement in self-consumption of the MG-generated power.…”
Section: E. Deep Q-Network
confidence: 99%
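For context, a minimal PyTorch sketch of the DQN update underlying such strategies. The architecture, state/action dimensions, and hyperparameters are illustrative assumptions, not the cited implementation.

import torch
import torch.nn as nn

# State: e.g. battery level, forecast renewable output, forecast demand.
# Actions: a small discrete set of charge/idle/discharge (or trade) levels.
class QNetwork(nn.Module):
    def __init__(self, state_dim=3, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    """One temporal-difference update on a batch of replayed transitions."""
    states, actions, rewards, next_states, dones = batch
    # Q-values of the actions actually taken.
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target from a periodically-synced target network.
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1 - dones) * next_q
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()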
“…Deep reinforcement learning is used in [29] to learn a policy for trading energy between microgrids, attaining a 12.7% reduction in scheduled power plant generation for the proposed system model. The state is chosen to describe the current battery level, the predicted production of renewable energy, and the forecasted demand.…”
Section: Related Work
confidence: 99%
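For context, a sketch of how the state vector described in this statement might be assembled; the normalizing constants are illustrative assumptions.

import numpy as np

# State = (battery level, predicted renewable production, forecast demand),
# each scaled to roughly [0, 1] for the Q-network's input.
def build_state(soc_kwh, pred_renewable_kw, forecast_demand_kw,
                battery_capacity_kwh=100.0, peak_kw=50.0):
    return np.array([
        soc_kwh / battery_capacity_kwh,
        pred_renewable_kw / peak_kw,
        forecast_demand_kw / peak_kw,
    ], dtype=np.float32)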