2016
DOI: 10.1016/j.applthermaleng.2016.06.149

Effect of the distribution of faults and operating conditions on AFDD performance evaluations

Abstract: Automated fault detection and diagnosis (AFDD) tools are used to identify degradation faults that reduce the performance and life of air-conditioning equipment. A recent methodology has been developed to evaluate the performance of AFDD tools. The methodology involves feeding a library of input data to an AFDD protocol and categorizing the results. The current paper describes a study that has been conducted to assess the effect of using various input data sets in the evaluations. These input data sets include d…

Cited by 19 publications (11 citation statements) | References 16 publications
“…Specific to fault diagnostics, while numerous research papers evaluate the performance of individual algorithms (Rossi and Braun 1997, Katipamula et al 1999, Ferrettu et al 2015), it is difficult to draw comparisons or understand the overall state of technology, as each study uses different datasets, test conditions, and metrics. A body of work by Yuill and Braun has explored these concerns, largely with a focus on handheld FDD devices for use with unitary systems (Yuill and Braun 2013, Yuill and Braun 2016, Yuill and Braun 2017). There is a lack of standard methodology and datasets for evaluating the accuracy of FDD technologies that continuously analyze operational data streams from building automation systems and built-up as well as unitary HVAC systems.…”
Section: Introduction
“…Such a procedure would provide a trusted, standard method for validation and comparison of FDD tools at all stages of development, from early-stage research to mature commercial products. Given the wide variety of FDD use cases and competing techniques, establishing a standard evaluation methodology is a daunting challenge [22,23]. Significant progress has been made in establishing FDD test procedures and metrics within both the buildings sector [24,25] and other industries [26,27].…”
Section: Introduction
“…Building on the Annex 34 work, Reddy [24,31] and Yuill and Braun [23,25,32,33] have contributed significantly to the development of FDD evaluation methodologies for chillers and unitary equipment, respectively. Reddy [24] describes FDD algorithm performance evaluation as one component of a broader evaluation methodology that examines FDD tools' performance, cost, ease of implementation, ease of use, data requirements, training requirements, and applicability to the needs of a particular site or customer.…”
Section: Introduction
“…However, simulated faults can be used indirectly to train data-driven FDD algorithms [7]. Simulation also offers key theoretical advantages over experimental or field data in the evaluation of FDD algorithms: the ground truth is known with greater certainty; the cost of generating evaluation samples is low; evaluation samples can be generated for a wide variety of equipment, building types, environmental conditions, and so on; and evaluation data sets may be constructed without significant gaps or biases [10,12]. However, an accurate simulation of faults is not trivial and rigorous validation is critical if the models are to be trusted [4,10].…”
Section: Introduction