2021
DOI: 10.48550/arxiv.2104.09785
Preprint
Model-predictive control and reinforcement learning in multi-energy system case studies

Abstract: Model-predictive control (MPC) offers an optimal control technique to establish and ensure that the total operation cost of multi-energy systems remains at a minimum while fulfilling all system constraints. However, this method presumes an adequate model of the underlying system dynamics, which is prone to modelling errors and is not necessarily adaptive. This has an associated initial and ongoing project-specific engineering cost. In this paper, we present an on- and off-policy multi-objective reinforcement l…
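The control pattern the abstract describes — minimize total operation cost subject to system constraints over a receding horizon — can be sketched in a few lines. Everything below is invented for illustration and is not from the paper: a toy battery model, a short price series, a three-step horizon, and brute-force plan enumeration standing in for a proper LP/QP solver.

```python
import itertools

# Toy system (illustrative, not from the paper): a battery that can
# charge/discharge 1 kWh per step, 4 kWh capacity, facing a known price series.
PRICES = [3.0, 1.0, 1.0, 4.0, 5.0, 2.0]   # cost per kWh bought from the grid
DEMAND = [1.0] * len(PRICES)              # fixed 1 kWh demand each step
CAP, HORIZON = 4.0, 3
ACTIONS = (-1.0, 0.0, 1.0)                # discharge / idle / charge (kWh)

def plan_cost(soc, t, plan):
    """Cost of following `plan` from time t; None if a constraint is violated."""
    total = 0.0
    for k, a in enumerate(plan):
        soc += a
        if not 0.0 <= soc <= CAP:
            return None                   # battery state-of-charge constraint
        grid = DEMAND[t + k] + a          # demand plus net battery charging
        if grid < 0:
            return None                   # no selling back in this toy model
        total += PRICES[t + k] * grid
    return total

def mpc_step(soc, t):
    """Receding horizon: enumerate all short plans, apply the first action
    of the cheapest feasible one (idle is always feasible, so one exists)."""
    h = min(HORIZON, len(PRICES) - t)
    best = min(
        (p for p in itertools.product(ACTIONS, repeat=h)
         if plan_cost(soc, t, p) is not None),
        key=lambda p: plan_cost(soc, t, p),
    )
    return best[0]

soc, cost = 2.0, 0.0
for t in range(len(PRICES)):
    a = mpc_step(soc, t)
    soc += a
    cost += PRICES[t] * (DEMAND[t] + a)
print(round(cost, 2))
```

The loop re-optimizes at every step, which is what makes the approach robust to disturbances but also what makes it dependent on the system model — the modelling-error and engineering-cost concern the abstract raises.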

Cited by 3 publications (3 citation statements)
References 32 publications
“…In [108], the management of IESs with integrated demand response is modeled as a Stackelberg game, and an actor-critic scheme is developed for the energy provider to adjust pricing and power dispatching strategies to cope with unknown private parameters of users. Extensive case studies are conducted in [109] to compare the performance of a twin delayed DDPG scheme against a benchmark linear model-predictive-control method, which empirically show that RL is a viable optimal control technique for IES management and can outperform conventional approaches.…”
Section: Energy Management
confidence: 99%
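The comparison cited above pits a twin delayed DDPG (TD3) agent against a linear MPC benchmark; TD3 needs deep-learning machinery, but the model-free pattern it relies on — learn a control policy from interaction, with no explicit system model — can be shown with a much smaller stand-in. The tabular Q-learning loop below is that stand-in, not TD3, and every detail (the toy battery dispatch problem, prices, hyperparameters) is invented for illustration.

```python
import random

random.seed(0)

# Toy dispatch problem (illustrative, not from the paper), solved model-free:
# the agent only observes (time, state-of-charge) and a reward, never the model.
PRICES = [3.0, 1.0, 1.0, 4.0, 5.0, 2.0]  # cost per kWh bought from the grid
DEMAND = 1.0                              # fixed 1 kWh demand each step
CAP = 4
ACTIONS = (-1, 0, 1)                      # discharge / idle / charge (kWh)

def step(t, soc, a):
    """Environment transition; infeasible actions are clipped to idle."""
    if not 0 <= soc + a <= CAP:
        a = 0
    reward = -PRICES[t] * (DEMAND + a)    # pay for demand plus net charging
    return soc + a, reward

Q = {}                                    # Q[(t, soc)] -> {action: value}
def q(t, soc):
    return Q.setdefault((t, soc), {a: 0.0 for a in ACTIONS})

ALPHA, EPS = 0.5, 0.2
for _ in range(3000):                     # epsilon-greedy Q-learning episodes
    soc = 2
    for t in range(len(PRICES)):
        qs = q(t, soc)
        a = random.choice(ACTIONS) if random.random() < EPS else max(qs, key=qs.get)
        nxt, r = step(t, soc, a)
        target = r if t + 1 == len(PRICES) else r + max(q(t + 1, nxt).values())
        qs[a] += ALPHA * (target - qs[a])  # TD(0) update, discount = 1
        soc = nxt

# Greedy rollout with the learned Q-table.
soc, cost = 2, 0.0
for t in range(len(PRICES)):
    a = max(q(t, soc), key=q(t, soc).get)
    soc, r = step(t, soc, a)
    cost -= r
print(cost)
```

No dynamics model appears anywhere in the training loop — the trade-off the surveyed works discuss: the engineering cost of modelling is replaced by the cost of exploration and training data.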
“…Works such as [7,23] have studied RL-based controllers in the context of building control and show that such RL controllers can lead to 5-12% energy savings compared to the existing rule-based controllers. Additionally, [24] compares the performance of MPC and RL controllers to show that RL controllers are able to outperform a linear MPC-based controller for two different test scenarios. Though these works indicate promising results for RL-based controllers, they also highlight existing challenges in real-world deployment of RL.…”
Section: Building Control and Modeling
confidence: 99%
“…Works such as [18,19] have studied RL-based controllers in the context of building control using different RL algorithms, such as deep Q-networks and fitted Q-iteration, and show that such RL controllers can lead to 5-12% energy savings compared to the existing rule-based controllers. Additionally, [20] compares the performance of MPC and RL controllers. The authors show that RL controllers are able to outperform a linear MPC-based controller for two different test scenarios.…”
Section: Building Control and Modeling
confidence: 99%