1995
DOI: 10.2172/29432
Evaluating prediction uncertainty

Abstract: The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumptio…

Cited by 97 publications (61 citation statements)
References 25 publications
“…McKay [4] proposed a more efficient estimation method based on the use of a single replicated Latin hypercube sampling (r-LHS) design for all K inputs. It should be noted that even with this efficiency improvement the main effect analysis is still very expensive requiring a substantial number (for example, thousands) of model evaluations.…”
Section: Main Effect Analysis
confidence: 99%
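The r-LHS idea quoted above can be sketched in code. The following is a minimal illustration, not the report's own implementation: one design of r replicated Latin hypercube samples, all sharing the same n stratified values per input, serves every input at once, and the main effect of input j is estimated as the variance ratio Var(E[Y|X_j]) / Var(Y). The function names and the toy model are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlhs(n, k, r, rng):
    """r replicated LH samples sharing the same n stratified values per input."""
    # One set of n stratified values per input, reused in every replicate.
    values = (np.arange(n)[:, None] + rng.random((n, k))) / n  # shape (n, k)
    reps = []
    for _ in range(r):
        # Each replicate re-pairs the columns with independent permutations.
        cols = [values[rng.permutation(n), j] for j in range(k)]
        reps.append(np.column_stack(cols))
    return np.stack(reps)  # shape (r, n, k)

def main_effect_indices(X, y):
    """Variance-ratio estimate Var(E[Y|X_j]) / Var(Y) for each input j."""
    r, n, k = X.shape
    y = y.reshape(r, n)
    total_var = y.var()
    indices = np.empty(k)
    for j in range(k):
        # Sorting each replicate by input j aligns the r runs that share
        # the same value of x_j into the same column.
        order = np.argsort(X[:, :, j], axis=1)
        grouped = np.take_along_axis(y, order, axis=1)  # shape (r, n)
        cond_means = grouped.mean(axis=0)               # E[Y | x_j = value]
        indices[j] = cond_means.var() / total_var
    return indices

# Toy model: y depends strongly on input 0, weakly on input 1, not on input 2.
X = rlhs(n=100, k=3, r=10, rng=rng)
y = X[:, :, 0] + 0.1 * X[:, :, 1]
print(main_effect_indices(X, y.ravel()))
```

Even this small example needs r * n = 1000 model runs, consistent with the quoted remark that main effect analysis typically requires thousands of evaluations.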
“…This paper focuses on efficient and accurate methods for computing the first-and second-order sensitivity indices. Specifically, McKay's [4] main effect analysis is an efficient method for computing the first-order sensitivity indices. However, a difficulty when applying this method is the determination of a suitable sample size to achieve sufficient accuracy.…”
Section: Introduction
confidence: 99%
“…Secondly, it can cope with groups of random inputs, which is the key of our methodology to estimate the influence of dynamic inputs. Note that, although the quasi-Monte Carlo sampling technique is known to have a better coverage of the input space [14,23], LHS is preferred here because we have to deal with a very high number of inputs (thousands) and the sampling design proposed in [22] (similar to rLHS [24]) is suited to LHS. Then, a second input LH sample Ω₂ is generated from Ω₁ by arbitrarily defining an independent rank matrix R₂ of size N_s × d and setting,…”
Section: Estimating the Sensitivity Indices with Two LHS Samples
confidence: 99%
“…Note that the new Ω₂ is also an LH sample, and this column-wise permutation trick is equivalent to replicated LH sampling [24]. After running the model with this new sample, a second model response vector…”
Section: Estimating the Sensitivity Indices with Two LHS Samples
confidence: 99%
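The column-wise permutation trick described in the snippets above can be sketched as follows. This is an illustrative assumption of the construction, not the cited paper's code: a second sample Ω₂ is obtained from Ω₁ by applying an independent rank (permutation) matrix to each column, so every column of Ω₂ contains exactly the same stratified values as Ω₁ and the Latin hypercube property is preserved.

```python
import numpy as np

rng = np.random.default_rng(1)

def lhs(n, d, rng):
    """A basic Latin hypercube sample of n points in [0, 1)^d."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n  # one point per stratum
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]  # pair the columns at random
    return u

def permute_columns(omega1, rng):
    """Build Omega_2 from Omega_1 via an independent rank matrix R_2 (n x d)."""
    n, d = omega1.shape
    r2 = np.argsort(rng.random((n, d)), axis=0)  # each column is a permutation
    return np.take_along_axis(omega1, r2, axis=0)

omega1 = lhs(1000, 4, rng)
omega2 = permute_columns(omega1, rng)
# Column-wise, omega2 holds the same values as omega1, only re-paired,
# so omega2 is itself an LH sample (the replicated-LHS equivalence).
```

Running the model on Ω₂ then yields the second response vector used in the sensitivity-index estimator, at the cost of one extra batch of model evaluations.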