2021
DOI: 10.48550/arxiv.2103.06183
Preprint

A Deep Learning approach to Reduced Order Modelling of Parameter Dependent Partial Differential Equations

Abstract: Within the framework of parameter dependent PDEs, we develop a constructive approach based on Deep Neural Networks for the efficient approximation of the parameter-to-solution map. The research is motivated by the limitations and drawbacks of state-of-the-art algorithms, such as the Reduced Basis method, when addressing problems that show a slow decay in the Kolmogorov n-width. Our work is based on the use of deep autoencoders, which we employ for encoding and decoding a high fidelity approximation of the solu…
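As a rough illustration of the autoencoder-based compression the abstract describes, the sketch below trains a small fully-connected autoencoder on a matrix of high-fidelity snapshots. It is a minimal sketch under assumed dimensions, layer sizes, and names (SnapshotAutoencoder, train), not the architecture of the paper itself:

```python
# Minimal sketch of an autoencoder for snapshot compression (illustrative;
# not the paper's exact architecture). Assumes snapshots are vectors of
# dimension n_h (high-fidelity DOFs) compressed to a latent dimension n_latent.
import torch
import torch.nn as nn

class SnapshotAutoencoder(nn.Module):
    def __init__(self, n_h: int, n_latent: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_h, 256), nn.ReLU(),
            nn.Linear(256, n_latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 256), nn.ReLU(),
            nn.Linear(256, n_h),
        )

    def forward(self, u):
        return self.decoder(self.encoder(u))

# Training loop: minimize the reconstruction error over a snapshot matrix
# U of shape (n_samples, n_h), e.g. FEM solutions for sampled parameters.
def train(model, U, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(U), U)
        loss.backward()
        opt.step()
    return model
```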

Cited by 5 publications (8 citation statements)
References 51 publications (86 reference statements)

Citation statements:
“…The takeaway is that even for modern artificial intelligence-based learning techniques, a decomposition of the form (1) seems like a crucial first step, particularly when one seeks accurate predictions. The bounds on the latent space of an auto-encoder developed in [20] further elucidate the importance of (at least) Lipschitz continuity while applying auto-encoders.…”
Section: Relation to Previous Work (mentioning)
Confidence: 98%
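To make the excerpt's Lipschitz point concrete: for a feedforward network with 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms gives an (often loose) upper bound on the network's Lipschitz constant. A minimal sketch of that classical bound follows; it is not the bound developed in [20]:

```python
# Crude upper bound on the Lipschitz constant of a feedforward network:
# with 1-Lipschitz activations (e.g. ReLU), Lip(f) <= product of the
# spectral norms of the linear layers. Illustrative only.
import torch
import torch.nn as nn

def lipschitz_upper_bound(model: nn.Module) -> float:
    bound = 1.0
    for layer in model.modules():
        if isinstance(layer, nn.Linear):
            # Largest singular value of the weight matrix.
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound
```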
“…We use the projection operator $\Pi$ to project $g_N(\cdot, t)$ onto $X_N$ and collect the resulting degrees of freedom in a vector $G(t) \in \mathbb{R}^N$. Likewise, since we expect $\phi(\cdot, t)$ to be a diffeomorphism (explained later), it is reasonable to project $\varphi(\cdot, t)$ onto the finite-element space $X_n$ using the projection operator $\Pi$ given in (20). We compute the value $\varphi(x_i, t)$ at the $i$-th vertex $x_i$ using the non-linear least-squares problem given as…”
Section: ROM for $g_N$ and $\varphi$ (mentioning)
Confidence: 99%
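The non-linear least-squares projection mentioned in the excerpt can be sketched generically: sample the target map, parametrize an ansatz by its nodal values, and fit with scipy.optimize.least_squares. Everything below (the sample map, the piecewise-linear ansatz, the node count) is an assumption for illustration; the cited paper's operator $\Pi$ and problem (20) are more specific:

```python
# Generic sketch of recovering nodal values by least squares
# (illustrative; the cited paper's projection (20) is more specific).
import numpy as np
from scipy.optimize import least_squares

# Hypothetical target map phi(x, t) sampled at points x_s for a fixed t.
x_s = np.linspace(0.0, 1.0, 50)
phi_target = x_s + 0.1 * np.sin(2 * np.pi * x_s)   # assumed example map

# Ansatz: piecewise-linear interpolant with unknown nodal values c at
# vertices x_v (a stand-in for the finite-element space X_n).
x_v = np.linspace(0.0, 1.0, 11)

def residuals(c):
    return np.interp(x_s, x_v, c) - phi_target

c0 = x_v.copy()                      # initial guess: identity map
sol = least_squares(residuals, c0)
phi_at_vertices = sol.x              # recovered values phi(x_i, t)
```

Here the ansatz happens to be linear in the coefficients, so the problem degenerates to linear least squares; least_squares handles a genuinely nonlinear parametrization of the map in exactly the same way.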
“…Interest in the topic and practical applications have increased recently at a fast pace [13,14,15], while variations aimed at introducing physics-related losses to link the ANN framework more deeply with the underlying physical model, e.g., the concept of physics-informed neural networks (PINNs) [16,17], show the significance ANNs are gaining in scientific computing. Furthermore, some relevant theoretical results concerning the complexity bounds of the problem and the error of the approximation have also been investigated, for example in [18,19,20,21], thus providing a rigorous mathematical framework for the related problems.…”
Section: Introduction (mentioning)
Confidence: 99%
“…A recently proposed strategy [29,21] aims at constructing DL-based ROMs (DL-ROMs) for nonlinear time-dependent parametrized PDEs in a non-intrusive way, approximating the PDE solution manifold by means of a low-dimensional, nonlinear trial manifold, and the nonlinear dynamics of the generalized coordinates on such reduced trial manifold, as a function of the time coordinate and the parameters. The former is learnt by means of the decoder function of a convolutional autoencoder (CAE) neural network; the latter through a (deep) feedforward neural network (DFNN), and the encoder function of the CAE.…”
Section: Introduction (mentioning)
Confidence: 99%
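The DL-ROM construction the excerpt describes pairs a decoder, which spans the nonlinear trial manifold, with a feedforward network that maps time and parameters to the reduced coordinates. A hedged sketch follows, with fully-connected layers standing in for the convolutional autoencoder and all sizes and names assumed:

```python
# Sketch of the DL-ROM idea from [29,21]: a decoder spans a nonlinear
# trial manifold, while a deep feedforward network (DFNN) maps time and
# parameters to the reduced coordinates. Fully-connected layers stand in
# for the convolutional autoencoder (CAE); all sizes are assumptions.
import torch
import torch.nn as nn

class DLROM(nn.Module):
    def __init__(self, n_h: int, n_latent: int, n_params: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_h, 128), nn.ReLU(), nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(), nn.Linear(128, n_h))
        # DFNN: (t, mu) -> reduced coordinates on the trial manifold.
        self.dfnn = nn.Sequential(
            nn.Linear(1 + n_params, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_latent))

    def forward(self, u_h, t_mu):
        z_enc = self.encoder(u_h)      # reduced coords from the snapshot
        z_dyn = self.dfnn(t_mu)        # reduced coords from (t, mu)
        return self.decoder(z_dyn), z_enc, z_dyn

# Training couples a reconstruction loss with a latent-matching loss so
# that the DFNN output lands on the manifold learnt by the autoencoder:
#   loss = ||decoder(z_dyn) - u_h||^2 + omega * ||z_enc - z_dyn||^2
```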
“…However, for physical phenomena characterized by a slow $N$-width decay, such as those featuring coherent structures that propagate over time [7], the manifold spanned by all the possible solutions is not of small dimension, so that ROMs relying on linear (global) subspaces might be inefficient. Alternative strategies to overcome this bottleneck can be, e.g., local RB methods [8,9,10], or nonlinear approximation techniques, mainly based on deep learning architectures, see, e.g., [11,12,13,14,15].…”
Section: Introduction (mentioning)
Confidence: 99%
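The slow $N$-width decay the excerpt refers to is easy to observe numerically: the singular values of a snapshot matrix for a pure-transport pulse decay far more slowly than those for a spreading, diffusion-like pulse, so a linear RB/POD subspace needs many more modes. A small illustrative experiment, with all setup values assumed:

```python
# Illustration of slow n-width decay for transport: singular values of
# snapshots of a travelling Gaussian pulse decay slowly, so any linear
# subspace (as used by classical RB/POD) needs many modes.
import numpy as np

x = np.linspace(0.0, 1.0, 400)
times = np.linspace(0.0, 0.5, 100)

# Transport: u(x, t) = g(x - t); the pulse moves without changing shape.
transport = np.stack(
    [np.exp(-((x - 0.25 - t) ** 2) / 2e-3) for t in times])

# Diffusion-like: the pulse stays put and spreads; fast n-width decay.
diffusion = np.stack(
    [np.exp(-((x - 0.5) ** 2) / (2e-3 + 0.05 * t)) for t in times])

for name, snaps in [("transport", transport), ("diffusion", diffusion)]:
    s = np.linalg.svd(snaps, compute_uv=False)
    # Number of modes needed to retain 99.99% of the snapshot energy.
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 0.9999)) + 1
    print(name, "modes for 99.99% energy:", n_modes)
```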