2021
DOI: 10.1016/j.energy.2021.121035
Lifelong control of off-grid microgrid with model-based reinforcement learning

Cited by 27 publications (7 citation statements)
References 27 publications
“…Deep learning-based control systems are also very popular for off-grid scenarios, as off-grid energy management systems are gaining increasing attention to provide sustainable and reliable energy services. In References [45] and [46], the authors developed algorithms based on deep reinforcement learning to deal with the uncertain and stochastic nature of renewable energy sources.…”
Section: Machine Learning
Citation type: mentioning (confidence: 99%)
“…RL learns the optimal control policies from interactions with the environment and selects the actions based on a given reward mechanism [27]. The control policies can be adaptively adjusted based on the feedback from the environment and show an advantage in adapting to stochastic environment changes [31]. Table 1 summarizes different applications of RL algorithms in managing energy system operations.…”
Section: Reinforcement Learning
Citation type: mentioning (confidence: 99%)
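
As a concrete illustration of the interaction-and-reward loop this statement describes, below is a minimal tabular Q-learning sketch in Python. The toy environment, state discretization, action set, and reward are hypothetical placeholders, not taken from the cited papers [27], [31].

```python
import random

# Minimal tabular Q-learning sketch: the agent learns a control policy
# purely from interaction, adjusting action choices based on the reward
# fed back by the environment. The toy 1-D "battery level" environment
# below is invented for illustration.

N_STATES, ACTIONS = 10, [-1, 0, +1]   # discretized level; charge/idle/discharge
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1    # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: move within bounds; reward keeps the state mid-range."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = -abs(nxt - N_STATES // 2)  # penalize drifting from the middle
    return nxt, reward

def act(state):
    """Epsilon-greedy action selection based on the learned values."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

state = 0
for _ in range(5000):                     # interaction loop
    a = act(state)
    nxt, r = step(state, a)
    # Q-learning update: nudge the estimate toward reward + discounted best.
    best_next = max(Q[(nxt, b)] for b in ACTIONS)
    Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
    state = nxt

# Greedy policy recovered from the learned value table.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```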
“…However, real-world data with stochastic patterns generally lead to an unstable training process. Model-based algorithms select actions based on a simple environment model, leading to lower data requirements and more robust convergence [31]. Most researchers have been attempting to optimize objectives (e.g., user cost reduction, renewable energy consumption, user comfort, and load flexibility) by implementing a single approach, such as the rule-based method, Q-learning algorithms, DQN, and other regulation strategies.…”
Section: Reinforcement Learning
Citation type: mentioning (confidence: 99%)
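
The data-efficiency argument in this statement can be made concrete with a short model-based sketch: fit a simple dynamics model from a small batch of transitions, then select actions by simulating rollouts of that model. The linear dynamics, quadratic tracking cost, and all parameters below are illustrative assumptions, not the method of the cited work or of the indexed paper.

```python
import numpy as np

# Model-based sketch: instead of learning values directly from scarce,
# noisy real data, fit a simple environment model and plan with it.
rng = np.random.default_rng(0)

# 1) Collect a small batch of real transitions (s, a, s') from a toy system.
def true_step(s, a):
    return 0.9 * s + 0.5 * a + rng.normal(scale=0.05)

data, s = [], 0.0
for _ in range(200):
    a = rng.uniform(-1, 1)
    s2 = true_step(s, a)
    data.append((s, a, s2))
    s = s2

# 2) Fit a linear dynamics model s' ~ w1*s + w2*a by least squares.
X = np.array([[s, a] for s, a, _ in data])
y = np.array([s2 for _, _, s2 in data])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def model_step(s, a):
    return w[0] * s + w[1] * a

# 3) Plan with the model: pick the (constant) action whose simulated
#    short rollout best tracks a setpoint (here, state = 1.0).
def plan(s, horizon=5, candidates=np.linspace(-1, 1, 21)):
    def rollout_cost(a):
        x, cost = s, 0.0
        for _ in range(horizon):
            x = model_step(x, a)          # simulate with the learned model
            cost += (x - 1.0) ** 2        # quadratic tracking cost
        return cost
    return min(candidates, key=rollout_cost)

print("learned weights:", w, "action from s=0:", plan(0.0))
```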
“…Kolodziejczyk et al [141] model the maximum charging and discharging power as non-linear functions of SoC. Totaro et al [95] model how charging and discharging efficiencies, as well as the battery storage capacity, degrade over time. For problem formulations that permit selling battery energy to the grid, the inverter efficiency as a function of discharging power is a significant factor, taken into account in only a minority of the works [23].…”
Section: A. Capturing Battery Losses in the Reinforcement Learning Environment
Citation type: mentioning (confidence: 99%)
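
To make the quoted modelling choices concrete, here is a toy battery model that combines both effects: maximum charging/discharging power as a non-linear function of SoC, and efficiencies plus capacity that degrade with cumulative throughput. The specific curves, degradation rates, and the `Battery.apply` interface are invented for illustration and do not reproduce the models of [141] or [95].

```python
from dataclasses import dataclass

@dataclass
class Battery:
    capacity_kwh: float = 10.0   # usable capacity, shrinks with wear
    soc: float = 0.5             # state of charge in [0, 1]
    eff_charge: float = 0.95
    eff_discharge: float = 0.95
    wear_per_kwh: float = 1e-4   # fractional degradation per kWh throughput

    def max_charge_kw(self) -> float:
        # Tapers toward zero as the battery fills (CC/CV-like behaviour).
        return 5.0 * (1.0 - self.soc) ** 0.5

    def max_discharge_kw(self) -> float:
        # Collapses near empty, where voltage sag limits power.
        return 5.0 * self.soc ** 0.5

    def apply(self, power_kw: float, dt_h: float = 1.0) -> float:
        """Apply a signed power command (>0 = charge); return the energy
        actually exchanged at the terminals after clipping to the
        SoC-dependent limits and applying efficiency losses."""
        power_kw = min(max(power_kw, -self.max_discharge_kw()),
                       self.max_charge_kw())
        if power_kw >= 0:
            stored = power_kw * dt_h * self.eff_charge
            self.soc = min(1.0, self.soc + stored / self.capacity_kwh)
        else:
            stored = power_kw * dt_h / self.eff_discharge  # negative
            self.soc = max(0.0, self.soc + stored / self.capacity_kwh)
        # Cumulative throughput degrades both capacity and efficiencies.
        wear = self.wear_per_kwh * abs(stored)
        self.capacity_kwh *= 1.0 - wear
        self.eff_charge *= 1.0 - wear
        self.eff_discharge *= 1.0 - wear
        return power_kw * dt_h

b = Battery()
for hour in range(4):
    print(hour, round(b.apply(4.0), 3), round(b.soc, 3))
```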
“…In the case of isolated microgrids, purchases from an external electricity market are either not possible [168], [95] or a last resort to complement local fossil-fuel-based emergency generation [96]. Phan & Lai [169] and Zhang et al [96] note that the trend towards a decentralized electric power system should in some seashore regions be complemented with a move to decentralized freshwater production, so a desalination plant is added to the microgrid.…”
Section: B) Isolated
Citation type: mentioning (confidence: 99%)