2022
DOI: 10.48550/arXiv.2206.10745
Preprint
Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning

Abstract: Neural operators have gained significant attention recently due to their ability to approximate high-dimensional parametric maps between function spaces. At present, only parametric function approximation has been addressed in the neural operator literature. In this work we investigate incorporating parametric derivative information in neural operator training; this information can improve function approximations, and additionally it can be used to improve the approximation of the derivative with respect to the pa…
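To make the idea concrete, here is a minimal sketch of derivative-informed training, not the authors' implementation: the network, shapes, and the equal weighting of the two terms are illustrative assumptions. The loss penalizes both the output misfit and the misfit of the Jacobian with respect to the input parameters.

```python
# Minimal sketch (hypothetical): derivative-informed operator learning in JAX.
# The loss matches both the parametric map m -> u and its Jacobian dU/dm.
import jax
import jax.numpy as jnp

def net(params, m):
    """Hypothetical two-layer surrogate mapping parameters m -> outputs u."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ m + b1)
    return W2 @ h + b2

def derivative_informed_loss(params, m_batch, u_batch, J_batch):
    """Squared output misfit plus Frobenius misfit on parametric Jacobians."""
    def per_sample(m, u, J):
        u_pred = net(params, m)
        J_pred = jax.jacfwd(lambda mm: net(params, mm))(m)  # Jacobian dU/dm
        return jnp.sum((u_pred - u) ** 2) + jnp.sum((J_pred - J) ** 2)
    return jnp.mean(jax.vmap(per_sample)(m_batch, u_batch, J_batch))
```

In practice the derivative term would typically carry its own weight and, for PDE-governed maps, the training Jacobians `J_batch` would come from adjoint or forward sensitivity solves.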

Cited by 2 publications (5 citation statements) | References 37 publications
“…where the choice of p depends on the regularity of the forward operator F. The choice p = 2 is often taken in practice, and additional learning of derivatives may be included via generalizing to T = W^{1,p}(M, ν_M; U), as in [71].…”
Section: Operator Learning With Neural Networks
Citation type: mentioning (confidence: 99%)
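For context, under the usual Bochner-space reading (our interpretation, not a statement from the citing paper), the W^{1,p}(M, ν_M; U) generalization penalizes the misfit of both the operator and its derivative:

```latex
\| F - F_\theta \|_{W^{1,p}(M,\nu_M;\,U)}^p
  = \int_M \Big( \| F(m) - F_\theta(m) \|_U^p
  + \| DF(m) - DF_\theta(m) \|^p \Big) \, d\nu_M(m),
```

so that p = 2 recovers a derivative-augmented least-squares training objective of the kind sketched above.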
“…In order to address this issue, the input and output spaces of neural operators are often restricted to some finite-dimensional reduced bases of M and U. Different architectures of neural operator incorporate different classical reduced basis representations such as proper orthogonal decomposition (POD) [33,34,39,40], Fourier representation [36], multipole graph representations [37], and derivative sensitivity bases [39,40,71] among others. These reduced basis representations offer a scalable means of learning structured maps between infinite-dimensional spaces, by taking advantage of their compact representations in specific bases, if such representations exist.…”
Section: Operator Learning With Neural Networks
Citation type: mentioning (confidence: 99%)
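One of the reduced-basis constructions mentioned in the quote, proper orthogonal decomposition (POD), can be sketched in a few lines; this is illustrative only, and the cited works use their own discretizations and snapshot pipelines.

```python
# Minimal POD sketch: leading left singular vectors of a snapshot matrix.
import numpy as np

def pod_basis(snapshots: np.ndarray, rank: int) -> np.ndarray:
    """Return the leading `rank` POD modes.

    snapshots: (n_dof, n_samples) matrix whose columns are sampled solutions.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]  # columns span the reduced output space

# Usage: Phi = pod_basis(S, 50); u_reduced = Phi.T @ u; u_approx = Phi @ u_reduced
```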
“…In particular, [21,22] use similar algorithmic approaches to this work by applying a greedy algorithm to maximize the expected information gain. Common strategies for dealing with the high dimensions imposed by the PDE model use the framework in [24] for discretization, combined with parameter reduction methods (e.g., [25,26,27,28,29,30,31]) and model order reduction (MOR) methods for uncertainty quantification (UQ) problems (e.g., [32,33,34,35,36]).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
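The greedy strategy referenced in this statement reduces, in outline, to repeatedly adding the candidate with the largest marginal expected information gain. The sketch below assumes a user-supplied EIG estimator `eig_of` (a hypothetical name; the cited works each define their own estimator).

```python
# Minimal greedy-design sketch: pick `budget` candidates (e.g., sensor
# locations) by marginal expected information gain (EIG).
def greedy_design(candidates, budget, eig_of):
    """Greedily select `budget` candidates maximizing eig_of(selected)."""
    selected = []
    remaining = list(candidates)
    for _ in range(budget):
        best = max(remaining, key=lambda c: eig_of(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected
```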