2006
DOI: 10.1016/j.enbuild.2005.06.002

Experimental analysis of simulated reinforcement learning control for active and passive building thermal storage inventory

Cited by 103 publications (63 citation statements: 1 supporting, 62 mentioning, 0 contrasting), published 2007–2022.
References 11 publications.
“…For example, Henze et al. [28,29] investigated the use of Q-learning for optimizing thermal energy storage systems in a building. They developed a building simulation platform utilizing MATLAB and EnergyPlus and investigated the use of the tabular Q-learning algorithm.…”
Section: Previous Work
confidence: 99%
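The statement above refers to tabular Q-learning; as a point of reference, below is a minimal Python sketch of that algorithm. The discrete action set, the epsilon-greedy exploration, and the learning-rate and discount values are generic illustrations, not the MATLAB/EnergyPlus implementation described in the cited work.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
ACTIONS = [0, 1, 2]                       # e.g. discharge / idle / charge storage (illustrative)

Q = defaultdict(float)                    # Q[(state, action)] -> value estimate

def choose_action(state):
    """Epsilon-greedy selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One tabular Q-learning backup: Q += alpha * (TD target - Q)."""
    target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# Usage with a hypothetical environment exposing reset() and step(a):
# state = env.reset()
# for _ in range(10_000):
#     a = choose_action(state)
#     next_state, reward = env.step(a)
#     update(state, a, reward, next_state)
#     state = next_state
```

In a thermal-storage application the state would typically encode quantities such as time of day and storage charge level, and the reward the negative energy cost; those specifics are assumptions here, not details from the paper.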
“…It is also concluded that this is achievable with high performance in terms of thermal comfort both in summer and winter for the houses in the project. In Liu and Henze (2006b) an experimental analysis of simulated learning control for active and passive building thermal storage is reported. In Liu and Henze (2006a) the theoretical foundation is presented, and in Liu and Henze (2006b) the results and analysis are found. The work was conducted at the Energy Resource Station in Iowa.…”
Section: Measurements
confidence: 99%
“…Thus, learning can take a long time, and equipment composed of mechanical parts, such as a robot, may not endure the required trials. Hybrid learning solves this problem [9], [10]. One hybrid-learning approach first derives a control function using non-linear control theory [11], [12]; this control function is then approximated using linear function approximation, and the approximated control function is finally improved using reinforcement learning.…”
Section: Introduction
confidence: 99%
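The three-step hybrid-learning pipeline quoted above (analytic control law, linear approximation, RL refinement) can be illustrated with a short sketch. Everything concrete below, including the `analytic_controller`, the polynomial `features` basis, the toy dynamics, and the quadratic cost, is a hypothetical stand-in, not material from [9]–[12].

```python
import numpy as np

def analytic_controller(x):
    """Stand-in for a control law derived from non-linear control theory."""
    return -np.tanh(x)                      # illustrative choice only

def features(x):
    """Linear basis used to approximate the analytic controller."""
    return np.array([1.0, x, x**2, x**3])

# Steps 1-2: fit linear weights to the analytic control function (least squares).
xs = np.linspace(-2.0, 2.0, 200)
Phi = np.stack([features(x) for x in xs])
w = np.linalg.lstsq(Phi, analytic_controller(xs), rcond=None)[0]

def approx_controller(x, w):
    """Linear approximation of the analytic controller."""
    return features(x) @ w

# Step 3: refine the weights with a simple policy-gradient-style RL update
# against a toy one-step regulation cost (a placeholder for the real plant).
alpha, sigma = 1e-3, 0.1
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(-2.0, 2.0)              # sampled state
    noise = rng.normal(0.0, sigma)          # exploration noise on the action
    u = approx_controller(x, w) + noise
    x_next = x + 0.1 * u                    # toy dynamics
    reward = -(x_next**2 + 0.01 * u**2)     # toy reward (negative quadratic cost)
    # REINFORCE-style update on the mean of a Gaussian policy:
    # grad log N(u; mu(x), sigma^2) w.r.t. w = (noise / sigma^2) * features(x).
    w += alpha * reward * (noise / sigma**2) * features(x)
```

Starting RL from an approximated analytic controller, rather than from scratch, is what lets the hybrid approach avoid the long and potentially hardware-damaging trial phase the excerpt describes.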