2022
DOI: 10.48550/arxiv.2210.03008
Preprint
Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems

Abstract: We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems (BIPs) with models governed by nonlinear parametric partial differential equations (PDEs). Neural operators have gained significant attention in recent years for their ability to approximate the parameter-to-solution maps defined by PDEs using as training data solutions of PDEs at a limited number of parameter samples. The computational cos…
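The abstract's core idea — correcting an inexact surrogate using the residual of the underlying model — can be illustrated with a toy linear example. This is a hypothetical sketch, not the paper's algorithm: the "neural operator" is stood in for by a perturbed solve of a parametrized linear system A(m) u = f, and one residual solve corrects it.

```python
import numpy as np

def forward_operator(m, n):
    # A(m): diffusion-like tridiagonal system with parameter-dependent diagonal.
    return (np.diag(2.0 + m)
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1))

def surrogate(m, f):
    # Stand-in for a trained neural operator: the true solution plus a small
    # synthetic approximation error (assumption made for illustration only).
    A = forward_operator(m, f.size)
    u = np.linalg.solve(A, f)
    rng = np.random.default_rng(0)
    return u + 0.05 * rng.standard_normal(u.size)

def residual_correction(m, f, u_hat):
    # One correction step: solve A(m) du = r with residual r = f - A(m) u_hat,
    # then update u_hat + du.
    A = forward_operator(m, f.size)
    r = f - A @ u_hat
    return u_hat + np.linalg.solve(A, r)

n = 50
m = 0.5 * np.ones(n)
f = np.ones(n)
u_true = np.linalg.solve(forward_operator(m, n), f)
u_hat = surrogate(m, f)            # inexact surrogate prediction
u_corr = residual_correction(m, f, u_hat)
print(np.linalg.norm(u_hat - u_true), np.linalg.norm(u_corr - u_true))
```

In this linear toy problem a single exact residual solve recovers the true solution up to round-off; in the nonlinear PDE setting the paper targets, the correction is approximate and reduces, rather than eliminates, the surrogate error.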

Cited by 1 publication (2 citation statements) · References 91 publications
“…As far as we are aware, the work that is most closely related to our paper consists in [6,8]. The work [6] considers Bayesian inverse problems where the observation operator may be nonlinear and the model is approximated by a neural network. In particular, [6, Theorem 1] bounds the Kullback-Leibler divergence between the original and approximated posterior in terms of an L^p norm for p ≥ 2 of the model error itself.…”
Section: Contributions
confidence: 99%
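A bound of the kind the citation describes can be sketched schematically. This is an illustrative form under standard assumptions (a prior measure μ₀ and a likelihood with suitable boundedness), not the exact statement of [6, Theorem 1]; writing μ^y for the posterior induced by the true forward map G and 𝜇̃^y for the posterior induced by the surrogate G̃:

```latex
D_{\mathrm{KL}}\bigl(\mu^{y} \,\big\|\, \tilde{\mu}^{y}\bigr)
  \;\le\; C \,\bigl\| \mathcal{G} - \tilde{\mathcal{G}} \bigr\|_{L^{p}_{\mu_0}},
  \qquad p \ge 2,
```

where the constant C depends on the data y and the noise model, and the surrogate error on the right is measured in expectation over the prior — so posterior accuracy is controlled by how well the surrogate approximates the forward map on average, not pointwise.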