2022
DOI: 10.1016/j.cma.2021.114181
POD-DL-ROM: Enhancing deep learning-based reduced order models for nonlinear parametrized PDEs by proper orthogonal decomposition

Cited by 163 publications (99 citation statements: 2 supporting, 97 mentioning, 0 contrasting).
References 28 publications.
“…Note that the idea of coupling the proper orthogonal decomposition with machine learning approaches was already investigated, even for time-parameter dependent systems (parametrized PDEs), see e.g. [10]. However, utilizing the shifted POD variant is, up to the authors' knowledge, a novelty.…”
Section: Podiann and Spodiann Framework (mentioning, confidence: 99%)
“…While sPOD exhibits great promise in approximation of data originating from transport-dominated systems via just a few spatial modes in the respective co-moving frames, it practically disallows for utilization of the standard projection-based approaches to the reduced order model construction [2]. In order to allow for practical application of sPOD in MOR, we propose to replace the projection framework by an interpolation between the discrete values {η_r}_{r=1}^ℓ utilizing an artificial neural network (ANN), an idea usable for parametrized PDEs and similar to the one presented in [9] and in [10].…”
Section: Introduction (mentioning, confidence: 99%)
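The excerpt above replaces Galerkin projection with a learned map from (time, parameter) to the reduced coefficients η_r. The following is a minimal PyTorch sketch of that general idea, not the cited authors' implementation: the class name `CoefficientNet`, the layer widths, and the random placeholder data are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical setup: the offline stage yields coefficients eta sampled at
# discrete (time, parameter) points. Instead of a projection-based ROM, a
# small feed-forward network regresses eta from (t, mu), so unseen (t, mu)
# are handled simply by evaluating the trained network.
class CoefficientNet(nn.Module):
    def __init__(self, n_inputs=2, n_coeffs=8, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_coeffs),
        )

    def forward(self, t_mu):
        return self.net(t_mu)

# Placeholder training data: (t, mu) samples and the coefficients computed
# offline from full-order snapshots.
t_mu = torch.rand(500, 2)
eta = torch.rand(500, 8)

model = CoefficientNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(t_mu), eta)
    loss.backward()
    opt.step()

# Online stage: evaluate at a new (t, mu) to obtain coefficients, then
# reconstruct the solution from the precomputed modes.
eta_new = model(torch.tensor([[0.3, 0.7]]))
```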
“…This work extends the POD-DL-ROM framework [30] in two directions: first, it replaces the CAE architecture of POD-DL-ROM with a long short-term memory (LSTM) based autoencoder [35,36], in order to better take into account time evolution when dealing with nonlinear unsteady parametrized PDEs (µ-POD-LSTM-ROM); second, it aims at performing extrapolation forward in time (compared to the training time window) of the PDE solution, for unseen values of the input parameters, a task often missed by traditional projection-based ROMs. Our final goal is to predict the PDE solution on a larger time domain (T_in, T_end) than the one, (0, T), used for the ROM training; here 0 ≤ T_in ≤ T_end and T_end > T.…”
Section: Introduction (mentioning, confidence: 97%)
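As a rough illustration of how an LSTM encoder-decoder over sequences of reduced coefficients can roll a solution forward past the training window, here is a minimal PyTorch sketch. The class `LSTMAutoencoder`, the layer sizes, and the repeat-last-state decoding strategy are assumptions made for illustration; this is not the µ-POD-LSTM-ROM architecture of the citing work.

```python
import torch
import torch.nn as nn

# The encoder compresses a window of POD coefficients into a latent state;
# the decoder unrolls that state for additional steps, which is what enables
# prediction beyond the training window (0, T) into (T_in, T_end).
class LSTMAutoencoder(nn.Module):
    def __init__(self, n_pod=16, n_latent=8):
        super().__init__()
        self.encoder = nn.LSTM(n_pod, n_latent, batch_first=True)
        self.decoder = nn.LSTM(n_latent, n_latent, batch_first=True)
        self.out = nn.Linear(n_latent, n_pod)

    def forward(self, x, n_future):
        # x: (batch, n_steps, n_pod) coefficients over the training window
        _, (h, c) = self.encoder(x)
        # repeat the last latent state as decoder input for n_future steps
        z = h[-1].unsqueeze(1).repeat(1, n_future, 1)
        y, _ = self.decoder(z, (h, c))
        return self.out(y)  # predicted coefficients on the future steps

model = LSTMAutoencoder()
x = torch.rand(4, 50, 16)          # 4 trajectories, 50 time steps, 16 POD modes
y_future = model(x, n_future=20)   # extrapolate 20 steps beyond the window
print(y_future.shape)              # torch.Size([4, 20, 16])
```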
“…DL-ROMs outperform POD-based ROMs such as the reduced basis method, regarding both numerical accuracy and computational efficiency at the testing stage. In the same spirit, POD-DL-ROMs [30] enable a more efficient training stage and the use of much larger FOM dimensions, without affecting network complexity, thanks to a prior dimensionality reduction of FOM snapshots through randomized POD (rPOD) [31], and a multi-fidelity pretraining stage, where different models (exploiting, e.g., coarser discretizations or simplified physical models) can be combined to iteratively initialize network parameters. This latter strategy has proven to be effective, for instance, in the real-time approximation of cardiac electrophysiology problems [32,33] and of problems in fluid dynamics [34].…”
Section: Introduction (mentioning, confidence: 99%)
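The prior dimensionality reduction mentioned above compresses the FOM snapshots to a moderate number of intrinsic coordinates before any network training. Below is a generic randomized-SVD style sketch in NumPy of such an rPOD step; the snapshot matrix, dimensions, and oversampling value are placeholder assumptions, not the setup of [30] or [31].

```python
import numpy as np

# Sketch of a randomized POD of a snapshot matrix S (n_dofs x n_snap).
rng = np.random.default_rng(0)
n_dofs, n_snap, N = 20000, 500, 64             # FOM size, snapshots, rPOD dimension

S = rng.standard_normal((n_dofs, n_snap))      # placeholder snapshot matrix

# Randomized range finder: multiply by a small random test matrix,
# orthonormalize, then take an exact SVD of the much smaller projected matrix.
Omega = rng.standard_normal((n_snap, N + 10))  # oversampled test matrix
Q, _ = np.linalg.qr(S @ Omega)                 # approximate basis for range(S)
B = Q.T @ S                                    # small (N+10) x n_snap matrix
U_tilde, sigma, Vt = np.linalg.svd(B, full_matrices=False)
V = Q @ U_tilde[:, :N]                         # rPOD basis, n_dofs x N

# Intrinsic coordinates fed to the deep networks instead of full snapshots.
S_N = V.T @ S                                  # N x n_snap
```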