2021
DOI: 10.1063/5.0039986

Reduced-order modeling of advection-dominated systems with recurrent neural networks and convolutional autoencoders

Abstract: A common strategy for the dimensionality reduction of nonlinear partial differential equations (PDEs) relies on the use of the proper orthogonal decomposition (POD) to identify a reduced subspace and the Galerkin projection for evolving dynamics in this reduced space. However, advection-dominated PDEs are represented poorly by this methodology since the process of truncation discards important interactions between higher-order modes during time evolution. In this study, we demonstrate that encoding using convo…
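The abstract contrasts POD/Galerkin truncation with autoencoder-based encoding. As a point of reference, the sketch below shows the standard POD step it mentions: a reduced basis obtained from the SVD of a snapshot matrix, followed by projection of a state onto that basis. The random snapshot matrix and the truncation rank r are illustrative assumptions, not data or settings from the paper; for advection-dominated snapshots the reconstruction error of such a low-rank basis typically stays large, which is the limitation the paper targets.

# Minimal POD sketch, assuming a generic snapshot matrix (not data from the paper)
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1024, 200))   # hypothetical snapshots: 1024 DOFs x 200 time steps

# POD modes are the left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(X, full_matrices=False)

r = 10                                  # truncation rank (assumed)
Phi = U[:, :r]                          # reduced basis

# Project a full state onto the reduced subspace and reconstruct it
a = Phi.T @ X[:, 0]                     # reduced coordinates of the first snapshot
x_rec = Phi @ a                         # rank-r reconstruction

rel_err = np.linalg.norm(X[:, 0] - x_rec) / np.linalg.norm(X[:, 0])
print(f"relative reconstruction error: {rel_err:.3e}")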

Cited by 213 publications (87 citation statements)
References: 58 publications
“…PNODEs can still learn multiple trajectories, which are characterized by the ODE parameters, even if the same initial states are given for different ODE parameters, which is not achievable with NODEs. Furthermore, the proposed framework is significantly simpler than the common neural network settings for NODEs when they are used to learn latent dynamics: the sequence-to-sequence architectures as in [3,5,14,21,22], which require that a (part of a) sequence is fed into the encoder network to produce a context vector, which is then fed into the NODE decoder network as an initial condition.…”
Section: (C) Learning Framework (mentioning)
confidence: 99%
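To make the quoted distinction concrete, here is a minimal numerical sketch of the PNODE idea described above: the ODE parameters are appended to the latent state before the learned right-hand side, so trajectories started from the same initial state but with different parameters evolve differently, without any sequence-to-sequence encoder. The tiny untrained MLP, the explicit Euler integrator, and all sizes are assumptions made for illustration, not the cited implementation.

# Parameterized latent ODE sketch (assumed toy setup, not the cited code)
import numpy as np

rng = np.random.default_rng(1)
latent_dim, param_dim, hidden = 4, 2, 16
W1 = rng.standard_normal((hidden, latent_dim + param_dim)) * 0.5
W2 = rng.standard_normal((latent_dim, hidden)) * 0.5

def rhs(z, mu):
    # learned velocity field dz/dt = f([z, mu]); here an untrained 2-layer MLP
    h = np.tanh(W1 @ np.concatenate([z, mu]))
    return W2 @ h

def integrate(z0, mu, dt=0.01, steps=100):
    # explicit Euler stand-in for the ODE solver used inside a NODE/PNODE
    z = z0.copy()
    traj = [z.copy()]
    for _ in range(steps):
        z = z + dt * rhs(z, mu)
        traj.append(z.copy())
    return np.stack(traj)

z0 = np.zeros(latent_dim)                       # identical initial latent state
traj_a = integrate(z0, np.array([0.1, 0.0]))    # parameter instance mu_a
traj_b = integrate(z0, np.array([1.0, 0.5]))    # parameter instance mu_b
print(np.linalg.norm(traj_a[-1] - traj_b[-1]))  # nonzero: same start, different trajectories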
“…The reduced dimension is set as p = 20. Figure 15 [figure content omitted: labels for the parameter instances µ(1) through µ(29)]. Figure 16 depicts the height profiles over time and Table 11 reports the relative ℓ2-errors for eight test parameter instances. Figure 16 and Table 11 again confirm that the PNODE outperforms the NODE by producing accurate parameter-dependent height profiles (i.e.…”
Section: (I) Data Preprocessing and Training (mentioning)
confidence: 99%
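For clarity, the relative ℓ2-error quoted above is simply the ratio of the ℓ2 norm of the prediction error to the ℓ2 norm of the reference profile; the placeholder arrays below are invented for illustration and are not data from the cited study.

# Relative l2-error metric on a hypothetical height profile (assumed toy data)
import numpy as np

h_ref  = np.linspace(1.0, 0.2, 50)              # hypothetical reference height profile
h_pred = h_ref + 0.01 * np.sin(np.arange(50))   # hypothetical ROM prediction

rel_l2 = np.linalg.norm(h_pred - h_ref) / np.linalg.norm(h_ref)
print(f"relative l2-error: {rel_l2:.3e}")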
“…For this reason, [42] implemented a multi-fidelity strategy in which the parametric dependence was reconstructed using a large number of low-fidelity models and a minimal number of high-fidelity evaluations. Other approaches exploit machine learning to construct an input-output relationship, with convolutional neural networks [43] and autoencoders [44], which again require training a network on preexisting data. Note that most of the above methods lead to pROMs that are only evaluated in the online phase, i.e., no simulation is actually performed, but the solutions at the known parameter locations are "interpolated" to obtain the result.…”
Section: Introduction (mentioning)
confidence: 99%
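A minimal sketch of the "interpolated at known parameter locations" behaviour described above: reduced coefficients stored offline at sampled parameters are blended to predict a new parameter instance, so nothing is simulated online. The inverse-distance weighting and the toy data are assumptions chosen for brevity; the cited works use more elaborate surrogates (multi-fidelity models, convolutional networks, autoencoders).

# Offline/online parametric ROM sketch with assumed inverse-distance weighting
import numpy as np

rng = np.random.default_rng(2)
mu_train = rng.uniform(0.0, 1.0, size=(20, 2))   # sampled parameter locations (offline)
a_train = np.sin(mu_train @ np.array([3.0, 1.0]))[:, None] * np.ones((1, 5))
# a_train: reduced (POD/autoencoder) coefficients stored at each training parameter

def predict(mu_new, eps=1e-12):
    # blend the offline coefficients: inverse-distance weights over parameter samples
    d = np.linalg.norm(mu_train - mu_new, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return w @ a_train                           # interpolated reduced coefficients

print(predict(np.array([0.4, 0.7])))             # online evaluation: no PDE solve performed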
“…Alternatively, nonlinear dimension reduction techniques such as kernel POD [14] or deep learning-based approaches like autoencoders [15,16] have also been used for extracting a reduced basis. Combining autoencoder-generated bases with various specialized machine learning algorithms for time series modeling results in fully non-intrusive reduced order models [17,18,19]. Hybrid methods [20,21] can also be obtained by combining a nonlinear manifold learning technique such as an autoencoder for discovering the latent space with an intrusive method for the temporal dynamics.…”
Section: Introduction (mentioning)
confidence: 99%
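A compact PyTorch sketch of the non-intrusive pattern mentioned in this excerpt (and echoed in the paper's title): an autoencoder discovers the latent space and a recurrent network advances the latent state in time. The dense (rather than convolutional) encoder, the layer sizes, and the toy training loop are assumptions made for brevity, not the architecture of the paper or of the cited references.

# Autoencoder + LSTM latent-dynamics sketch (assumed toy architecture and data)
import torch
import torch.nn as nn

full_dim, latent_dim = 128, 8

encoder = nn.Sequential(nn.Linear(full_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, full_dim))
lstm = nn.LSTM(input_size=latent_dim, hidden_size=latent_dim, batch_first=True)

# hypothetical snapshot sequences: batch x time x full_dim
snapshots = torch.randn(4, 20, full_dim)

params = list(encoder.parameters()) + list(decoder.parameters()) + list(lstm.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for _ in range(5):                          # a few illustrative training steps
    z = encoder(snapshots)                  # encode every snapshot to the latent space
    z_pred, _ = lstm(z[:, :-1, :])          # predict the next latent state from the past
    x_rec = decoder(z)                      # reconstruction branch
    loss = nn.functional.mse_loss(x_rec, snapshots) \
         + nn.functional.mse_loss(z_pred, z[:, 1:, :].detach())
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))                          # training loss after the toy loop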