2017 · DOI: 10.1137/16m1076824
Well-Posed Bayesian Inverse Problems: Priors with Exponential Tails

Abstract: We consider the well-posedness of Bayesian inverse problems when the prior measure has exponential tails. In particular, we consider the class of convex (log-concave) probability measures, which includes the Gaussian and Besov measures as well as certain classes of hierarchical priors. We identify appropriate conditions on the likelihood distribution and the prior measure which guarantee existence, uniqueness, and stability of the posterior measure with respect to perturbations of the data. We also cons…
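For orientation, here is a minimal sketch of the standard framework the abstract refers to, following the setting popularised by Stuart (2010); the symbols µ₀, Φ, and Z(y) below denote the usual prior, potential (negative log-likelihood), and normalisation constant, and are not notation taken from this page:

```latex
% Posterior as a change of measure with respect to the prior \mu_0;
% \Phi(u; y) is the potential (negative log-likelihood), Z(y) the evidence.
\[
  \frac{d\mu^{y}}{d\mu_{0}}(u) \;=\; \frac{1}{Z(y)} \exp\bigl(-\Phi(u; y)\bigr),
  \qquad
  Z(y) \;=\; \int_{X} \exp\bigl(-\Phi(u; y)\bigr)\, \mu_{0}(du).
\]
% Well-posedness: \mu^{y} exists, is unique, and depends continuously on y.
```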

Cited by 28 publications (34 citation statements) · References 33 publications
“…Stability of the posterior with respect to the observed data y and the log-likelihood Φ was established for Gaussian priors by Stuart (2010) and for more general priors by many later contributions (Dashti et al., 2012; Hosseini, 2017; Hosseini and Nigam, 2017; Sullivan, 2017). (We note in passing that stability of BIPs with respect to perturbation of the prior is possible but much harder to establish, particularly when the data y are highly informative and the normalisation constant Z(y) is close to zero; see e.g.…”
Section: Bayesian Inverse Problems
confidence: 99%
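The stability result this excerpt refers to is usually stated in the Hellinger metric; a sketch of its standard form follows (the constant C(r) and the exact hypotheses on Φ vary across the cited works):

```latex
% Hellinger distance between posteriors for data y and y':
% under local Lipschitz continuity of \Phi in y, one typically obtains
\[
  d_{\mathrm{H}}\bigl(\mu^{y}, \mu^{y'}\bigr)
  \;\le\; C(r)\, \|y - y'\|,
  \qquad \|y\|, \|y'\| \le r,
\]
% so the posterior depends locally Lipschitz-continuously on the data.
```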
“…satisfies (15). Unfortunately, the reverse kernel Q*_β in this case does not always have a closed form and must be identified on a case-by-case basis.…”
Section: The SARSD Algorithm
confidence: 99%
“…Here G : X → ℝ^M is a deterministic forward map and Σ ∈ ℝ^{M×M} is a positive-definite symmetric matrix. The additive Gaussian noise model above is widely used in practice [8,20,34] and it is the primary model in this article (see [15,20,34] for examples with other noise models). Using (2) we can readily identify µ^u(y), the conditional probability measure of the data y given u, with Lebesgue density proportional to exp(−½‖y − G(u)‖²_Σ). Here Λ denotes the Lebesgue measure and we used the familiar notation ‖·‖_Σ := ‖Σ^{−1/2}·‖₂.…”
Section: Introduction
confidence: 99%
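To make the notation concrete, here is a short Python sketch of the Gaussian potential Φ(u; y) = ½‖y − G(u)‖²_Σ implied by this noise model; the forward map G, the dimensions, and the toy data below are hypothetical placeholders, not taken from the cited paper:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_potential(u, y, G, Sigma):
    """Potential Phi(u; y) = 0.5 * ||y - G(u)||_Sigma^2,
    where ||v||_Sigma := ||Sigma^{-1/2} v||_2 (noise-covariance weighting)."""
    residual = y - G(u)
    # Solve Sigma z = residual via Cholesky (Sigma is SPD), so that
    # residual^T z = ||Sigma^{-1/2} residual||_2^2 without forming Sigma^{-1/2}.
    c, low = cho_factor(Sigma)
    z = cho_solve((c, low), residual)
    return 0.5 * residual @ z

# Toy usage with a hypothetical linear forward map (illustration only).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))           # M = 3 observations, u in R^5
G = lambda u: A @ u                        # placeholder forward map
Sigma = 0.1 * np.eye(3)                    # noise covariance (SPD)
u_true = rng.standard_normal(5)
y = G(u_true) + rng.multivariate_normal(np.zeros(3), Sigma)
print(gaussian_potential(u_true, y, G, Sigma))
```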
“…We now show that by Theorem 10 we obtain well-posedness w.r.t. the Wasserstein distance under the same basic assumptions on Φ stated in [7,38], and slightly modified in [21,22,40], for establishing well-posedness w.r.t. the Hellinger distance.…”
Section: Remark 13 (Proofs via Couplings)
confidence: 99%
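For reference, the Wasserstein distance mentioned in this excerpt is defined via couplings; a standard form is sketched below (the metric d and exponent p are generic, not specific to the cited work):

```latex
% p-Wasserstein distance between probability measures \mu and \nu on (X, d),
% where \Gamma(\mu, \nu) denotes the set of couplings of \mu and \nu:
\[
  W_{p}(\mu, \nu)
  = \Bigl( \inf_{\pi \in \Gamma(\mu, \nu)}
           \int_{X \times X} d(u, v)^{p}\, \pi(\mathrm{d}u, \mathrm{d}v)
    \Bigr)^{1/p}.
\]
% Coupling-based proofs bound W_p(\mu^{y}, \mu^{y'}) by constructing an
% explicit coupling \pi of the two posteriors.
```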