2023
DOI: 10.1016/j.jcp.2022.111713
B-DeepONet: An enhanced Bayesian DeepONet for solving noisy parametric PDEs using accelerated replica exchange SGLD


Cited by 17 publications (6 citation statements)
References 17 publications
“…A highly effective approach for tackling the inverse problem involves adopting the Bayesian inference framework, as referenced in prior works [6,10,23,33,35]. In this context, we define the measured flux data as d ∈ R^{n_d} and represent the prior distribution of ξ as p_0(ξ), where ξ denotes the unknowns used to parameterize the shape of the source and the inverse quantities of interest (QoIs).…”
Section: Bayesian Framework (mentioning)
confidence: 99%
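
The setup quoted above combines the prior p_0(ξ) with a likelihood for the measured data d. As a rough illustration, here is a minimal sketch of the resulting unnormalized log-posterior, assuming Gaussian measurement noise and a standard-normal prior; the forward map is a hypothetical placeholder (in the cited works it would be a PDE solver or a trained DeepONet surrogate):

```python
# Minimal sketch of the quoted Bayesian setup, assuming a Gaussian noise
# model and a standard-normal prior p_0(xi). forward_map is hypothetical.
import numpy as np

def log_posterior(xi, d, forward_map, sigma=0.1):
    """Unnormalized log p(xi | d) = log-likelihood + log-prior (up to a constant).

    xi          : (n_xi,) parameters of the source shape / inverse QoIs
    d           : (n_d,) measured flux data
    forward_map : callable xi -> predicted data in R^{n_d} (hypothetical)
    sigma       : assumed standard deviation of the measurement noise
    """
    residual = d - forward_map(xi)
    log_like = -0.5 * np.sum(residual**2) / sigma**2   # Gaussian likelihood
    log_prior = -0.5 * np.sum(xi**2)                   # standard-normal p_0(xi)
    return log_like + log_prior
```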
“…The choice of the temperature parameter τ has a crucial influence on the sampling. Motivated by the replica-exchange methods [23,24], [27] extends the preconditioned LD-based methods to multiple chains with different temperature parameters to accelerate distribution simulation.…”
Section: Langevin Diffusion and Bayesian Sampling (mentioning)
confidence: 99%
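
To make the role of the temperature parameter τ concrete, below is a toy two-chain replica-exchange Langevin sketch targeting exp(-U(x)). U and grad_U are assumed callables (e.g., a negative log-posterior and its gradient); the accelerated SGLD variants in the cited paper replace the full gradient with stochastic estimates and correct the swap rate accordingly, which this sketch omits.

```python
# Toy two-chain replica-exchange Langevin sampler for exp(-U(x)).
# U and grad_U are assumed callables; this is a sketch, not the paper's method.
import numpy as np

def replica_exchange_ld(U, grad_U, x0, tau_low=1.0, tau_high=10.0,
                        step=1e-3, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x_lo, x_hi = x0.copy(), x0.copy()
    samples = []
    for _ in range(n_steps):
        # Langevin step: x <- x - step * grad_U(x) + sqrt(2 * step * tau) * noise
        x_lo = (x_lo - step * grad_U(x_lo)
                + np.sqrt(2 * step * tau_low) * rng.standard_normal(x_lo.shape))
        x_hi = (x_hi - step * grad_U(x_hi)
                + np.sqrt(2 * step * tau_high) * rng.standard_normal(x_hi.shape))
        # Metropolis swap test: the low-temperature chain tends to inherit the
        # high-temperature chain's state when that state has lower energy,
        # which accelerates exploration of separated modes.
        log_alpha = (1.0 / tau_low - 1.0 / tau_high) * (U(x_lo) - U(x_hi))
        if np.log(rng.uniform()) < log_alpha:
            x_lo, x_hi = x_hi, x_lo
        samples.append(x_lo.copy())
    return np.asarray(samples)
```

Only the low-temperature chain is kept as samples; the high-temperature chain exists purely to help the low-temperature chain escape local modes.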
“…Furthermore, we aim to expand our research from bi-fidelity to multi-fidelity and Bayesian operator learning [28,42–44]. The goal is to optimize the use of data while mitigating the computational costs associated with predicting computationally expensive data.…”
Section: Future Work (mentioning)
confidence: 99%
“…In that setting, the user can use fewer samples since the equations provide additional information during training. Optimization algorithms that exploit the DON structure have also been proposed to handle noisy data and train DON [19,20], which may be more practical for real data. DON has also been generalized to a wider class of nonlinear approximation problems using shifts [13].…”
Section: Introduction (mentioning)
confidence: 99%