Design optimization using hyper-reduced-order models (2014)
DOI: 10.1007/s00158-014-1183-y

Cited by 132 publications (65 citation statements); references 40 publications.
“…Whilst the model typically has a horizontal resolution of 2.8° × 2.8°, with 60 vertical levels up to 0.1 hPa, the high computational burden of the variational framework requires that a coarser resolution of 5.6° × 5.6°, with 31 vertical levels up to 10 hPa, is used whilst testing the inverse model. Previous studies have attempted to alleviate the computational burden of the data assimilation or inversion process using reduced versions of non-linear models (Amsallem et al., 2013; Stefanescu et al., 2014), or with incremental optimisation using low-resolution linearised models (Courtier et al., 1994). Further study is necessary to explore the potential for these methods with the TOMCAT model.…”
Section: The TOMCAT CTM
confidence: 99%
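
The incremental optimisation cited here (Courtier et al., 1994) runs the expensive nonlinear model only in an outer loop and minimises a quadratic cost with a cheap low-resolution linearised model in an inner loop. A minimal sketch of that structure, using hypothetical matrix stand-ins (nonlinear_model for the full model, M_coarse for the coarse tangent-linear model, P for coarse-to-fine prolongation, H for the observation operator, B_inv and R_inv for background and observation precisions), not the TOMCAT implementation:

```python
import numpy as np

# Minimal sketch of incremental variational minimisation; all operators are
# hypothetical matrix stand-ins, not the TOMCAT system.
def incremental_minimisation(x0, y_obs, nonlinear_model, M_coarse, P, H,
                             B_inv, R_inv, n_outer=3, n_inner=50, step=0.1):
    x = x0.copy()
    for _ in range(n_outer):
        # Innovation uses the full nonlinear model, once per outer iteration.
        d = y_obs - H @ nonlinear_model(x)
        G = H @ P @ M_coarse            # coarse linearised observation operator
        dx = np.zeros(M_coarse.shape[1])
        for _ in range(n_inner):        # steepest descent on the quadratic inner cost
            grad = B_inv @ dx + G.T @ (R_inv @ (G @ dx - d))
            dx -= step * grad
        x = x + P @ dx                  # prolong the coarse increment to full resolution
    return x
```
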
“…Nevertheless, for optimization with large-scale nonlinear computational models, the introduction of parametric reduced-order models is often necessary (see for instance [3,24]). Concerning the algorithms for solving optimization problems under uncertainties, many methods have been proposed in the literature, such as gradient-based learning, which is adapted to convex problems [48,89], and global search algorithms such as stochastic algorithms, genetic algorithms, and evolutionary algorithms [14,46].…”
Section: Algorithms For Solving Optimization Problems Under Uncertainty
confidence: 99%
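
In such a design loop, the parametric reduced-order model replaces each full-order solve with a small dense system. A minimal sketch of that pattern, assuming a precomputed basis V and problem-specific callables (assemble_reduced_operator and objective are hypothetical placeholders, not an API from the cited works):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical POD-based surrogate for design optimization: each objective
# evaluation solves a small reduced system instead of the full-order model.
def solve_rom(mu, V, assemble_reduced_operator):
    A_r, b_r = assemble_reduced_operator(mu)   # small n_r x n_r system for parameter mu
    q = np.linalg.solve(A_r, b_r)              # reduced coordinates
    return V @ q                               # approximate full-order state

def rom_objective(mu, V, assemble_reduced_operator, objective):
    u = solve_rom(mu, V, assemble_reduced_operator)
    return objective(u, mu)

# Usage, given a basis V, a design objective J, and an initial guess mu0:
# result = minimize(rom_objective, mu0, args=(V, assemble_reduced_operator, J))
```
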
“…To achieve this, we will consider hyper-reduction [2,6]. The hyper-reduction approaches we consider rely on the same paradigm of expansion through a basis as we use for the solution state itself.…”
Section: Hyper-reduction For Galerkin Projection
confidence: 99%
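
The paradigm described in this passage (expanding the nonlinear term through its own basis, just as the state is expanded) can be sketched as follows; V, U, A, f, and sample_idx are assumed names for quantities built offline from snapshots, and a real code would precompute V.T @ A @ V and V.T @ U offline:

```python
import numpy as np

# Sketch of the shared expansion paradigm: state u ≈ V q, nonlinearity f(u) ≈ U c.
def hyper_reduced_rhs(q, V, U, A, f, sample_idx):
    u = V @ q                                   # state expanded in basis V
    # Coefficients c fitted from a few sampled entries of f(u); assumes
    # len(sample_idx) == U.shape[1]. A real code would evaluate f only at
    # those entries, never assembling the full nonlinear vector online.
    c = np.linalg.solve(U[sample_idx, :], f(u)[sample_idx])
    return V.T @ (A @ u) + (V.T @ U) @ c        # Galerkin-projected right-hand side
```
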
“…Due to the non-polynomial nonlinearities that arise in standard roughness parameterizations and stabilized FEM approximations, further reduction of the nonlinearities in the SWE is required to break dependence on the fine-scale dimension [2]. We evaluate two types of so-called hyper-reduction for our reduced models: Discrete Empirical Interpolation (DEIM) [5,13] and gappy POD [14,23].…”
Section: Introduction
confidence: 99%
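
Both variants named in this passage admit compact sketches. Below, deim_points is the standard greedy DEIM index selection and gappy_pod_reconstruct is the least-squares fit used by gappy POD; the basis U (built from snapshots of the nonlinear term) and the sampling sizes are assumptions, not code from the cited paper:

```python
import numpy as np

def deim_points(U):
    """Greedy DEIM selection: one interpolation index per basis vector."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Interpolate the j-th basis vector with the previous ones, then
        # place the next index where the residual is largest.
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def gappy_pod_reconstruct(U, idx, samples):
    """Gappy POD: least-squares basis coefficients from sampled entries."""
    c, *_ = np.linalg.lstsq(U[idx, :], samples, rcond=None)
    return U @ c                                # reconstructed full vector
```

Gappy POD typically oversamples (more indices than basis columns), so the square interpolation system of DEIM becomes an overdetermined least-squares problem, which is what the lstsq call reflects.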