2014
DOI: 10.1002/nme.4759
An adaptive and efficient greedy procedure for the optimal training of parametric reduced‐order models

Abstract: An adaptive and efficient approach for constructing reduced-order models (ROMs) that are robust to changes in parameters is developed. The approach is based on a greedy sampling of the underlying high-dimensional model (HDM) together with an efficient procedure for exploring the configuration space and identifying parameters for which the error is likely to be high. Because this exploration is based on a surrogate model for an error indicator, it is amenable to a fast training phase. Furthermore, a mode…
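As a concrete illustration of the procedure the abstract outlines, the following is a minimal, hypothetical Python sketch of a greedy training loop driven by a cheap error indicator. The callables solve_hdm, build_rom, and error_indicator are placeholders for a user-supplied solver, basis construction, and indicator surrogate; this is not the authors' implementation.

```python
import numpy as np

def greedy_training(candidate_params, solve_hdm, build_rom, error_indicator,
                    max_samples=10, tol=1e-4):
    """Greedy sampling: repeatedly add the candidate parameter where a cheap
    error indicator predicts the current ROM is least accurate."""
    # Seed with an arbitrary first sample of the high-dimensional model (HDM).
    sampled = [candidate_params[0]]
    snapshots = [solve_hdm(sampled[0])]
    rom = build_rom(snapshots)
    for _ in range(max_samples - 1):
        # Exploration phase: score every candidate with the cheap indicator
        # instead of solving the expensive HDM at each one.
        scores = np.array([error_indicator(rom, mu) for mu in candidate_params])
        worst = int(np.argmax(scores))
        if scores[worst] < tol:  # indicator deems the ROM accurate everywhere
            break
        sampled.append(candidate_params[worst])
        snapshots.append(solve_hdm(candidate_params[worst]))  # one HDM solve per iteration
        rom = build_rom(snapshots)  # enrich the ROM with the new snapshot
    return rom, sampled
```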

Cited by 142 publications (167 citation statements)
References 49 publications (160 reference statements)
“…However, one can use sparse grids or other adaptive sampling techniques [15,27,47] to avoid the exponential growth. Nevertheless, suffering from the curse of dimensionality is a common problem with almost all PMOR methods and algorithms.…”
Section: Remark 7.2 (mentioning)
confidence: 99%
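To see the exponential growth this remark refers to, a small back-of-the-envelope computation (illustrative numbers only):

```python
# A full tensor-product grid with n points per parameter axis needs n**d
# HDM solves in d dimensions -- the "curse of dimensionality".
n = 5
for d in (1, 2, 5, 10):
    print(f"d={d}: tensor grid = {n**d} points")
# d=1: 5, d=2: 25, d=5: 3125, d=10: 9765625
# Greedy/adaptive sampling instead fixes the number of expensive HDM solves
# (e.g., a few dozen) independently of d.
```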
“…In addition to this single deterministic system, parameters can also enter the system, and hence the sampling of snapshots typically involves both the choice of time instances and parameter values. The choice of sampling parameters is crucial to the definition of an accurate parametric ROM but is not the focus of the present paper; the reader is referred to the references [14-16]. The dynamical system, suitable solver, and sampling procedure are assumed to be given in the present work.…”
Section: Data Approximation and Nonlinear MOR With Local Bases (mentioning)
confidence: 99%
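As a sketch of the snapshot-sampling idea this statement describes, assuming a hypothetical solver solve_hdm(mu) that returns one trajectory (one column per retained time instance) per parameter value:

```python
import numpy as np

def pod_basis(solve_hdm, params, n_modes):
    """Collect snapshots over both parameter values and time instances,
    then compress them with POD (a truncated SVD)."""
    # Assumption: solve_hdm(mu) returns an (n_dof, n_time) array.
    S = np.hstack([solve_hdm(mu) for mu in params])   # snapshot matrix
    U, _, _ = np.linalg.svd(S, full_matrices=False)   # left singular vectors = POD modes
    return U[:, :n_modes]                             # reduced-order basis
```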
“…ROM uncertainty is indeed quite similar to the so-called model form uncertainty, arising whenever a limited understanding of the modeled process is available [18,17]. Concerning the modeling of the reduction error itself, different approaches to approximation have been considered very recently: the so-called approximation error model [1,26], Gaussian process (GP)-based calibration (or GP machine learning) [39], interpolant-based error indicators [33], and regression-based error indicators [13]. In all of these cases, a statistical representation of the ROM error through calibration experiments is used to model the difference between the full-order and the lower-order model.…”
Section: Introduction (mentioning)
confidence: 99%
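One of the approaches this statement lists, a regression-based error indicator, can be sketched in a few lines. The calibration data below is invented for illustration and is not taken from any of the cited works:

```python
import numpy as np

# Calibration points: parameters at which the true ROM error was measured
# against the full-order model (values here are illustrative only).
mu_train  = np.array([0.1, 0.4, 0.7, 1.0])
err_train = np.array([1e-2, 3e-4, 5e-4, 2e-2])

# Fit a quadratic in mu to the log-error; exponentiate to predict.
coeffs = np.polyfit(mu_train, np.log(err_train), deg=2)

def predicted_error(mu):
    return np.exp(np.polyval(coeffs, mu))  # cheap error surrogate

print(predicted_error(0.55))  # query the indicator at an unseen parameter
```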
“…In contrast to the approaches taken in [13,33], where the objective was to construct reduction error models that could be used to train or adapt the ROM accordingly in the case that no other error estimator was readily available, our proposed approach is instead aimed at the accurate solution of inverse problems using ROMs, and may use existing ROM error bounds as additional information, if available. In previous work [21] we have shown that using low-fidelity ROMs as surrogates for solving inverse problems can lead to biased and overly optimistic posterior distributions.…”
Section: Introduction (mentioning)
confidence: 99%