2015
DOI: 10.1002/nme.4883
Adaptive training of local reduced bases for unsteady incompressible Navier–Stokes flows

Abstract: This report presents a numerical study of reduced-order representations for simulating incompressible Navier–Stokes flows over a range of physical parameters. The reduced-order representations combine ideas of approximation for nonlinear terms, of local bases, and of least-squares residual minimization. To construct the local bases, temporal snapshots for different physical configurations are collected automatically until an error indicator is reduced below a user-specified tolerance. An adaptive time-i…

Cited by 8 publications (9 citation statements)
References 42 publications
“…Note that the approximate solution incurs no error if it exactly satisfies the FOM O∆E (2), such that the residual is zero at all time instances. This also illustrates why the residual norm is often viewed as a useful error indicator for guiding greedy methods for snapshot collection [7,6,19,48,49] or trust-region optimization algorithms [52,50,51].…”
Section: A Posteriori Error Bounds
confidence: 94%
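The residual-norm indicator invoked in this statement is cheap to evaluate along a ROM trajectory. Below is a minimal sketch, assuming a hypothetical linear time-discrete FOM x_{n+1} = A x_n + b and a reduced basis V (these operators and names are illustrative, not the paper's own):

    import numpy as np

    def residual_norm_indicator(A, b, V, x_hat_traj):
        """Norm of the FOM time-discrete residual along a ROM trajectory.

        A, b       : hypothetical FOM operators for x_{n+1} = A x_n + b
        V          : reduced basis, shape (N, k)
        x_hat_traj : reduced coordinates per time step, shape (n_steps, k)
        A residual of zero at every step means the ROM solution satisfies
        the FOM equations exactly, i.e., it incurs no error.
        """
        x = V @ x_hat_traj.T                         # lift to full space
        r = x[:, 1:] - (A @ x[:, :-1] + b[:, None])  # step-wise residuals
        return np.linalg.norm(r, axis=0)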
“…, µ_K} to be selected either a priori, guided by physical intuition, or by sampling techniques such as random or Latin hypercube sampling (LHS) (see, e.g., [53]) and sparse grids (see, e.g., [54]). However, the offline construction in Algorithm 4 can accommodate more general training techniques such as the greedy [55] and POD-greedy [56] algorithms, possibly combined with adaptivity [57,58], heuristic error indicators [52], and localized bases [59]. To this end, however, suitable a posteriori error estimates are needed.…”
Section: C) Extract Global Solution Matrix and Vector Bases
confidence: 99%
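A greedy training loop of the kind referenced above can be sketched as follows; fom_solve, build_pod_basis, and error_indicator are hypothetical stand-ins for a snapshot solver, POD compression, and a residual-based indicator, not routines from any of the cited works:

    import numpy as np

    def greedy_training(candidates, fom_solve, build_pod_basis,
                        error_indicator, tol, max_iters=20):
        """Greedily enrich a reduced basis where the indicator is largest."""
        snapshots = [fom_solve(candidates[0])]       # seed with one FOM solve
        V = build_pod_basis(np.hstack(snapshots))
        for _ in range(max_iters):
            errs = [error_indicator(mu, V) for mu in candidates]
            worst = int(np.argmax(errs))
            if errs[worst] < tol:                    # all candidates resolved
                break
            snapshots.append(fom_solve(candidates[worst]))
            V = build_pod_basis(np.hstack(snapshots))
        return V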
“…Many, if not all, of these a posteriori error indicators rely on the inexpensive computation of the norm of residuals associated with the HDM solution. As these residuals are of large dimension, most of the focus in the context of repeated analyses has been on the case of separable, affine parameter dependency of the HDM.…”
Section: Introduction
confidence: 99%
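The benefit of affine parameter dependency mentioned here is that the squared residual norm can be assembled from small precomputed matrices, so the online cost is independent of the HDM dimension. A minimal sketch for a steady affine system A(µ) x = b with A(µ) = Σ_q θ_q(µ) A_q (notation illustrative, not taken from the paper):

    import numpy as np

    def offline(A_terms, b, V):
        """Precompute parameter-independent reduced quantities."""
        AV = [Aq @ V for Aq in A_terms]
        G = [[AVq.T @ AVr for AVr in AV] for AVq in AV]  # cross Gramians
        c = [AVq.T @ b for AVq in AV]
        return G, c, float(b @ b)

    def residual_norm_sq(theta, x_hat, G, c, bb):
        """Online: ||A(mu) V x_hat - b||^2 without forming HDM vectors."""
        val = bb
        for q, tq in enumerate(theta):
            val -= 2.0 * tq * (c[q] @ x_hat)
            for r, tr in enumerate(theta):
                val += tq * tr * (x_hat @ (G[q][r] @ x_hat))
        return val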
“…The two quantities are often found to be highly correlated, and as a result, the norm of the residual is often used to monitor the convergence of an iterative procedure for solving a given set of linear or nonlinear equations. In the context of model reduction, the norm of the residual is used in greedy approaches, as well as for determining when a ROM loses accuracy in the context of PDE-constrained optimization. Here, a linear model is constructed to estimate the true error using the residual norm (an a posteriori error indicator).…”
Section: Introduction
confidence: 99%
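The linear error model described in this statement can be fit by least squares from a few configurations where both the true error and the residual norm are available. A minimal sketch under that assumption (function names are hypothetical):

    import numpy as np

    def fit_error_model(residual_norms, true_errors):
        """Least-squares fit of e ~ alpha * ||r|| + beta from training pairs."""
        r = np.asarray(residual_norms, dtype=float)
        X = np.column_stack([r, np.ones_like(r)])
        coef, *_ = np.linalg.lstsq(X, np.asarray(true_errors, dtype=float),
                                   rcond=None)
        return coef                                  # (alpha, beta)

    def estimate_error(residual_norm, coef):
        """A posteriori error estimate from the residual norm alone."""
        alpha, beta = coef
        return alpha * residual_norm + beta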