2014
DOI: 10.1080/15376494.2013.828819

A Resource Allocation Framework for Experiment-Based Validation of Numerical Models

Abstract: In experiment-based validation, uncertainties and systematic biases in model predictions are reduced either by increasing the amount of experimental evidence available for model calibration, thereby mitigating prediction uncertainty, or by increasing the rigor in the definition of physics and/or engineering principles, thereby mitigating prediction bias. Hence, decision makers must regularly choose between allocating resources for experimentation or for further code development. The authors propose a decision-mak…
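The trade-off described here, spending the next unit of budget either on experiments (reducing prediction uncertainty) or on code development (reducing prediction bias), can be illustrated with a small greedy sketch. Everything below (function names, costs, benefit values, the diminishing-returns factor) is a hypothetical illustration, not the decision-making framework the paper actually proposes:

```python
# Hypothetical sketch of the experiment-vs-code-development trade-off
# described in the abstract. All names and numbers are illustrative
# assumptions, not the paper's framework.

def allocate(budget, cost_experiment, cost_code_dev,
             uncertainty_reduction_per_experiment,
             bias_reduction_per_code_dev):
    """Greedily pick the option with the larger benefit per unit cost."""
    plan = []
    while budget >= min(cost_experiment, cost_code_dev):
        exp_rate = uncertainty_reduction_per_experiment / cost_experiment
        dev_rate = bias_reduction_per_code_dev / cost_code_dev
        if exp_rate >= dev_rate and budget >= cost_experiment:
            plan.append("experiment")
            budget -= cost_experiment
            uncertainty_reduction_per_experiment *= 0.8  # diminishing returns
        elif budget >= cost_code_dev:
            plan.append("code development")
            budget -= cost_code_dev
            bias_reduction_per_code_dev *= 0.8
        else:
            break
    return plan

print(allocate(budget=100.0, cost_experiment=20.0, cost_code_dev=30.0,
               uncertainty_reduction_per_experiment=1.0,
               bias_reduction_per_code_dev=1.2))
```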

Cited by 22 publications (18 citation statements)
References 33 publications
“…One of the best-known implementations of this approach is that of Kennedy and O'Hagan [2], where Gaussian Processes (GPs) were used to create statistical models of model discrepancy (as well as emulators, for the cases where simulations of the system of interest are computationally expensive). This idea has contributed to many works in the field of model validation, including generalised validation frameworks [4][5][6], model validation metrics that consider model discrepancy [1], metrics to establish when sufficient experimental data has been gathered in the model validation process [7] and approaches for resource allocation (between code development and experimental testing [8] or the development of substructures within a model [9]).…”
Section: Literature Review
confidence: 99%
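The Kennedy and O'Hagan approach quoted above represents model discrepancy with a Gaussian Process fitted to the difference between observations and simulator output. A minimal sketch of that idea, using scikit-learn and an invented toy simulator and data set (none of it from the cited works):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative stand-ins: a cheap "simulator" and noisy "experiments".
def simulator(x):
    return np.sin(x)  # imperfect physics model

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 5.0, 15).reshape(-1, 1)
# The "truth" carries a bias term the simulator misses.
y_obs = np.sin(x_obs) + 0.3 * x_obs + rng.normal(0.0, 0.05, x_obs.shape)

# Kennedy-O'Hagan-style discrepancy: fit a GP to (observation - simulation).
residual = (y_obs - simulator(x_obs)).ravel()
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x_obs, residual)

# Corrected prediction = simulator output + learned discrepancy (with uncertainty).
x_new = np.array([[2.5]])
delta_mean, delta_std = gp.predict(x_new, return_std=True)
print(simulator(x_new).ravel() + delta_mean, delta_std)
```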
“…The metric is transformed to range between zero and infinity, with zero representing the poorest possible coverage and infinity representing perfect coverage. This transformation is accomplished using the following functional form: […] The PMI has been established as a quantitative and objective metric to evaluate the predictive capabilities of numerical models and has been applied to, for instance, the Preston-Tonks-Wallace model of plastic deformation [3], the Viscoplastic Self-Consistent (VPSC) material model [21], and the nuclear fuel performance code LIFEIV [22]. Recently modified by Stull et al. [9], the PMI includes four attributes: coverage of the domain, robustness to model parameter uncertainty, scaled discrepancy bias, and model complexity, N_K, as described in Eq.…”
Section: Transforming The Proposed Coverage Metric Into An Intuitive Indicator
confidence: 99%
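The functional form referenced in this excerpt is lost to extraction. As an illustrative assumption only, one simple map with exactly the stated limits (zero at the poorest coverage, infinity at perfect coverage) sends a coverage fraction $c \in [0, 1)$ to $[0, \infty)$:

$$\eta(c) = \frac{c}{1 - c}, \qquad \eta(0) = 0, \qquad \lim_{c \to 1^{-}} \eta(c) = \infty.$$

Whether the cited PMI papers use this particular form cannot be recovered from the text above.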
“…The governing equation is written as [23]: $\dot{\varepsilon}_{ij}(\bar{x}) = \dot{\gamma}_0 \sum_s m_{ij}^s \left[ m_{kl}^s \sigma_{kl}(\bar{x}) / \tau^s \right]^n$, where $\dot{\gamma}_0$ is a normalization factor [23]. A large number of parameters are required to completely describe the crystallographic textures using weights associated with a partition of 3-D orientation space [21]. However, for calibration and validation purposes, the final textures can be characterized by two components: (i) intensity associated with a retained (001) cube texture and (ii) intensity associated with a (101) compression texture.…”
Section: VPSC Materials Model
confidence: 99%
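As a rough numerical illustration of the power-law slip-system sum reconstructed above, the following evaluates the strain rate at a single material point. The Schmid tensor, critical resolved shear stress, and rate-sensitivity exponent are made-up placeholders, not calibrated VPSC inputs:

```python
import numpy as np

# Illustrative evaluation of eps_dot_ij = gamma_dot_0 * sum_s m_ij^s *
# (m^s : sigma / tau^s)^n for one material point. All inputs are
# placeholders, not VPSC's calibrated values.
def strain_rate(sigma, schmid_tensors, tau_crss, n, gamma_dot_0=1.0):
    eps_dot = np.zeros((3, 3))
    for m_s, tau_s in zip(schmid_tensors, tau_crss):
        rss = np.tensordot(m_s, sigma)  # resolved shear stress, m^s : sigma
        eps_dot += gamma_dot_0 * m_s * np.sign(rss) * abs(rss / tau_s) ** n
    return eps_dot

sigma = np.array([[0.0, 80.0, 0.0],
                  [80.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])      # MPa, pure shear, illustrative
schmid = [np.array([[0.0, 0.5, 0.0],
                    [0.5, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])]   # one symmetrized Schmid tensor
print(strain_rate(sigma, schmid, tau_crss=[50.0], n=10))
```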
“…The EIPS and EDIST are reportedly the most effective criteria for improving two specific attributes of the PMI metric: discrepancy and coverage (Atamturktur et al. 2013). The VPSC code for the application of interest has 12 uncertain parameters, which are calibrated against the physical experiments on the 5182 aluminum alloy in Atamturktur et al. (2014b). In this study, the synthetic experimental data for both the starting set of experiments and the BSD-selected experimental batches were generated using the posterior mean values of the input parameters (Table 2) reported in Atamturktur et al. (2014b).…”
Section: Proof-of-Concept Application: VPSC Materials Model
confidence: 99%
“…The VPSC code for the application of interest has 12 uncertain parameters, which are calibrated against the physical experiments on the 5182 aluminum alloy in Atamturktur et al. (2014b). In this study, the synthetic experimental data for both the starting set of experiments and the BSD-selected experimental batches were generated using the posterior mean values of the input parameters (Table 2) reported in Atamturktur et al. (2014b). The operational domain was defined by the range of the control parameters (strain rate and temperature) given in Table 2.…”
Section: Proof-of-Concept Application: VPSC Materials Model
confidence: 99%
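The synthetic-data strategy described in this excerpt (run the model at the posterior-mean parameter values over the operational domain of strain rate and temperature, then treat the outputs as experiments) can be sketched as below. The surrogate model and every number here are assumptions standing in for the VPSC code and the cited study's Table 2 values:

```python
import numpy as np

# Illustrative sketch of synthetic-data generation: evaluate the model at
# the posterior mean of the calibrated parameters over the operational
# domain (strain rate, temperature), then add observation noise. The
# model function and all numbers are assumptions, not the cited study's.
rng = np.random.default_rng(1)

def model(strain_rate, temperature, theta):
    # Placeholder surrogate standing in for an expensive simulation.
    return theta[0] * np.log10(strain_rate) + theta[1] * temperature

theta_posterior_mean = np.array([12.0, -0.05])  # hypothetical calibrated values

strain_rates = np.logspace(-3, 1, 5)            # 1/s, operational domain
temperatures = np.linspace(300.0, 500.0, 5)     # K

synthetic = [
    (sr, T, model(sr, T, theta_posterior_mean) + rng.normal(0.0, 0.5))
    for sr in strain_rates
    for T in temperatures
]
print(synthetic[0])
```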