2019
DOI: 10.1016/j.enbuild.2019.03.024

A performance evaluation framework for building fault detection and diagnosis algorithms

Abstract: Fault detection and diagnosis (FDD) algorithms for building systems and equipment represent one of the most active areas of research and commercial product development in the buildings industry. However, far more effort has gone into developing these algorithms than into assessing their performance. As a result, considerable uncertainties remain regarding the accuracy and effectiveness of both research-grade FDD algorithms and commercial products, a state of affairs that has hindered the broad adoption of FDD tool…

Cited by 30 publications (15 citation statements)
References 31 publications

Citation statements, ordered by relevance:
“…Most AFDD systems are customized for the customers' unique applications, which provides solutions that fit with building owners' needs but also increases overall costs and limits the broad application and usefulness of AFDD systems. These results support the need for a standardized fault taxonomy (Frank et al 2019) and reporting format for better uniformity across the industry.…”
Section: Introduction (supporting)
confidence: 66%
“…Different frameworks and fault definitions can lead to the same fault being classified in different categories and lead to different and inappropriate corrective actions. These potential conflicts support a need for well-accepted fault definitions as well as a more unified understanding of how faults should be characterized to provide consistency across AFDD companies (Frank et al 2019).…”
Section: Conclusion and Recommendations (mentioning)
confidence: 99%
“…To complete step 3 of the process, for each input sample, a condition-based convention was used to define the ground truth (faulted or fault-free operational state). As detailed in Frank et al 2018, a condition-based convention defines a fault as the presence of an improper or undesired physical condition in a system or piece of equipment, for example, a stuck damper, or a leaking valve. This is in contrast to behavior-based (e.g.…
Section: As-Operated FDD Benefits and Costs (mentioning)
confidence: 99%
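To make the quoted convention concrete, here is a minimal sketch (not taken from the cited papers) of how a condition-based ground-truth label might be assigned to each input sample: a sample is "faulted" whenever an improper physical condition is present, regardless of whether performance is visibly degraded. The `Sample` fields and function names are illustrative assumptions.

```python
# Minimal sketch of a condition-based ground-truth convention (illustrative
# names, not the cited framework's actual data model): a sample is labeled
# "faulted" whenever an improper physical condition (e.g., a stuck damper or
# leaking valve) is present during the sample, independent of its impact.
from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: str
    imposed_conditions: list  # physical conditions present during the sample

def condition_based_label(sample: Sample) -> str:
    """Return the ground-truth operational state under a condition-based convention."""
    return "faulted" if sample.imposed_conditions else "fault_free"

samples = [
    Sample("2019-07-01 12:00", ["stuck_damper"]),
    Sample("2019-07-01 13:00", []),
]
print([condition_based_label(s) for s in samples])  # ['faulted', 'fault_free']
```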
“…There is a lack of standard methodology and datasets for evaluating the accuracy of FDD technologies that continuously analyze operational data streams from building automation systems and built-up as well as unitary HVAC systems. In response, we describe a previously developed methodology for evaluating the performance of FDD algorithms (Frank et al 2018, 2019a; Yuill and Braun 2013), and a newly curated initial test dataset of AHU system faults, with known ground-truth conditions. We've applied the evaluation methodology on three sample FDD algorithms, including two commercial tools, and an instantiation of National Institute of Standards and Technology's (NIST's) air-handling unit performance assessment (APAR) rules (House et al 2001) against the dataset to understand the types of performance insights that can be gained, priorities for further expanding the test dataset for maximum utility in evaluating FDD algorithm performance, and whether the test methodology is scalable and repeatable.…”
Section: Introduction (mentioning)
confidence: 99%
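As an illustration of the kind of performance insight such an evaluation can yield, the sketch below scores an FDD algorithm's per-sample detection outputs against ground-truth labels using a simple confusion matrix. The metric names and data layout are assumptions for illustration, not the evaluation framework's actual reporting format.

```python
# Illustrative scoring of an FDD algorithm's detection outputs against
# ground-truth labels via a confusion matrix (assumed metric names; the
# cited framework's actual metrics may differ).
from collections import Counter

def score_detection(ground_truth, predictions):
    """Tally per-sample detection outcomes and derive summary rates."""
    counts = Counter()
    for truth, pred in zip(ground_truth, predictions):
        if truth == "faulted" and pred == "faulted":
            counts["true_positive"] += 1
        elif truth == "faulted" and pred == "fault_free":
            counts["false_negative"] += 1  # missed detection
        elif truth == "fault_free" and pred == "faulted":
            counts["false_positive"] += 1  # false alarm
        else:
            counts["true_negative"] += 1
    faulted = counts["true_positive"] + counts["false_negative"]
    normal = counts["false_positive"] + counts["true_negative"]
    return {
        "true_positive_rate": counts["true_positive"] / faulted if faulted else None,
        "false_alarm_rate": counts["false_positive"] / normal if normal else None,
        **counts,
    }

truth = ["faulted", "faulted", "fault_free", "fault_free"]
pred  = ["faulted", "fault_free", "fault_free", "faulted"]
print(score_detection(truth, pred))
```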
“…A persistent challenge has been the lack of common datasets and test methods to support the development of FDD methods and to benchmark their accuracy against one another. Prior work has made progress toward common test methods 6,7; however, test datasets remain a gap.…”
Section: Background and Summary (mentioning)
confidence: 99%