2016
DOI: 10.1016/j.ress.2015.10.003

Separation of aleatory and epistemic uncertainty in probabilistic model validation

Abstract: This paper investigates model validation under a variety of different data scenarios and clarifies how different validation metrics may be appropriate for different scenarios. In the presence of multiple uncertainty sources, model validation metrics that compare the distributions of model prediction and observation are considered. Both ensemble validation and point-by-point approaches are discussed, and it is shown how applying the model reliability metric point-by-point enables the separation of contributions…
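
The central object in the abstract is a distribution-comparison validation metric, applied either to the pooled ensemble of data or point-by-point at each input condition. Below is a minimal sketch of a Monte Carlo estimate of the model reliability metric used point-by-point; the tolerance eps, the toy model, and the synthetic samples are illustrative assumptions, not the paper's actual example.

```python
import numpy as np

def model_reliability(pred_samples, obs_samples, eps):
    """Monte Carlo estimate of the model reliability metric: the probability
    that the difference between a model prediction and an observation falls
    within the tolerance band [-eps, eps]."""
    pred = np.asarray(pred_samples, dtype=float)
    obs = np.asarray(obs_samples, dtype=float)
    # Compare every prediction sample against every observation sample.
    diff = pred[:, None] - obs[None, :]
    return float(np.mean(np.abs(diff) < eps))

# Point-by-point use: evaluate the metric separately at each input condition,
# then inspect how the reliability varies across conditions.
rng = np.random.default_rng(0)
conditions = [0.5, 1.0, 1.5]  # hypothetical input settings
for x in conditions:
    pred = rng.normal(loc=2.0 * x, scale=0.1, size=2000)       # model output samples
    obs = rng.normal(loc=2.0 * x + 0.05, scale=0.1, size=30)   # synthetic "data"
    print(x, model_reliability(pred, obs, eps=0.2))
```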

Cited by 61 publications (28 citation statements) · References 38 publications
“…Owing to the complexity of computational systems, we may not take the validation result under a single input condition as the final credibility of the simulation system. The literature (Mullins, 2015) provides an approach to integrate the model validation results from multiple simulation scenarios, defined by its equation (6).…”
Section: Credibility Quantification Methods
confidence: 99%
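
The excerpt above refers to an aggregation rule (its equation (6)) that is not reproduced here. Purely as a hedged illustration of the idea, one common way to combine per-scenario validation results is a weighted average over the input conditions; the weights, scores, and function name below are hypothetical, not the cited formula.

```python
import numpy as np

def aggregate_reliability(reliabilities, scenario_weights=None):
    """Combine per-scenario validation results into a single credibility score
    as a weighted average.  Illustrative stand-in only; the cited equation (6)
    is not reproduced in the excerpt."""
    r = np.asarray(reliabilities, dtype=float)
    if scenario_weights is None:
        w = np.full_like(r, 1.0 / r.size)    # equal weights by default
    else:
        w = np.asarray(scenario_weights, dtype=float)
        w = w / w.sum()                      # normalise to a probability vector
    return float(np.dot(w, r))

# e.g. three input conditions with different likelihoods of occurring in use
print(aggregate_reliability([0.92, 0.75, 0.60], scenario_weights=[0.5, 0.3, 0.2]))
```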
“…To resolve the thermal challenge problem posed in (Roy, 2011), (Ferson, 2008) proposed an area metric, which takes the integral of the area difference between the cumulative distribution function (CDF) of the simulation output and the empirical CDF of the measured samples as the disagreement between the simulation model and the real-world system (Li, 2014; Sankararaman, 2011). Mullins classified data scenarios by their aleatory and epistemic uncertainty and studied how different validation metrics may be appropriate for various data scenarios (Mullins, 2015). The literature (Zhang, 2011) provides a group AHP method to evaluate the credibility of complex simulation systems, in which a Hadamard convex combination is used to aggregate the judgement matrices constructed by different assessment experts.…”
Section: Introduction
confidence: 99%
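
The area metric mentioned in this excerpt integrates the absolute difference between the model output CDF and the empirical CDF of the measurements. A small sketch of that computation follows; representing the model CDF empirically from samples and the sample values themselves are assumptions for illustration.

```python
import numpy as np

def area_metric(model_samples, data_samples):
    """Area validation metric: the integral of the absolute difference between
    the model output CDF (estimated empirically here) and the empirical CDF of
    the measured data."""
    m = np.sort(np.asarray(model_samples, dtype=float))
    d = np.sort(np.asarray(data_samples, dtype=float))
    # Both empirical CDFs are step functions, so the integrand is piecewise
    # constant between consecutive points of the pooled sample.
    grid = np.sort(np.concatenate([m, d]))
    Fm = np.searchsorted(m, grid[:-1], side="right") / m.size
    Fd = np.searchsorted(d, grid[:-1], side="right") / d.size
    return float(np.sum(np.abs(Fm - Fd) * np.diff(grid)))

rng = np.random.default_rng(1)
print(area_metric(rng.normal(10.0, 1.0, 5000),    # model predictions
                  rng.normal(10.3, 1.2, 40)))     # measured samples
```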
“…Causes of aleatory uncertainty include biological variability (Edelman and Gally, 2001) and regulation (Marder and Goaillard, 2006). The importance of distinguishing between aleatory and epistemic uncertainties has evoked some debate (Hora, 1996; Ferson and Ginzburg, 1996; Oberkampf et al., 2002; Ferson et al., 2004; Kiureghian and Ditlevsen, 2009; Mullins et al., 2016), but the distinction is at least important for how to interpret the results of an uncertainty quantification. Due to inherent variability, the parameters do not have true fixed values, but rather distributions of possible values.…”
Section: Applicability of Uncertainpy
confidence: 99%
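
One standard way to keep aleatory and epistemic contributions separate, in line with the distinction drawn in the excerpt above, is nested (double-loop) Monte Carlo sampling: an outer loop over epistemically uncertain parameters, an inner loop over aleatory variability. The toy model, parameter interval, and sample sizes in this sketch are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(theta, x):
    """Toy model: output depends on an aleatory input x and a parameter theta."""
    return theta * x

# Outer loop: epistemic uncertainty about the parameter (limited knowledge),
# represented here by draws from an assumed interval.
theta_samples = rng.uniform(1.8, 2.2, size=50)

# Inner loop: aleatory variability of the input, re-sampled for each theta.
curves = []
for theta in theta_samples:
    x = rng.normal(1.0, 0.1, size=1000)     # inherent variability
    curves.append(np.sort(model(theta, x)))

# Each row of `curves` is one possible output distribution: spread across rows
# reflects epistemic uncertainty, spread within a row aleatory uncertainty.
curves = np.array(curves)
medians = np.median(curves, axis=1)         # one median per epistemic draw
print("epistemic spread of the median output:", np.percentile(medians, [5, 95]))
```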
“…Each metric is designed to compare features of a model-data pair to quantify validation: the square-error metric compares the difference between data and model values in a point-to-point or interval fashion [23]; the reliability metric [24] and the probability of agreement [25] compare continuous model outputs with data expectation values (the model reliability metric was extended beyond expectation values in [26]); the frequentist validation metric [27,28] and statistical hypothesis testing compare data and model test statistics; the area metric compares the cumulative distribution of the model to the estimated cumulative distribution of the data [15,16,17,18,19,20]; probability density function (pdf) comparison metrics such as the KL divergence quantify the "closeness" between pdfs; and Bayesian model testing compares the posterior probability that each model would correctly output the observed data [12,13,29,30,31,32,33]. A detailed review of most of these metrics may be found in [34,35,36,37] and the references therein. In particular, [35] is an up-to-date review that considers many validation metrics in the cases of data and model certainty, data uncertainty and model certainty, and data and model uncertainty.…”
Section: Introduction
confidence: 99%
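
Among the pdf-comparison metrics listed in the excerpt above, the KL divergence can be estimated from samples with kernel density estimates. The sketch below does this for synthetic data and model samples; the distributions and sample sizes are placeholders, not values from any of the cited studies.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(data_samples, model_samples):
    """Monte Carlo estimate of KL(data || model) using kernel density
    estimates of both probability density functions."""
    p = gaussian_kde(data_samples)    # pdf estimated from observations
    q = gaussian_kde(model_samples)   # pdf estimated from model outputs
    x = np.asarray(data_samples, dtype=float)
    # KL(p || q) = E_p[log p(x) - log q(x)], estimated with the data samples.
    return float(np.mean(np.log(p(x)) - np.log(q(x))))

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, 200)      # synthetic observations
model = rng.normal(0.2, 1.1, 5000)    # synthetic model predictions
print(kl_divergence(data, model))
```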