2016
DOI: 10.1088/0266-5611/32/7/075006

A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations

Abstract: Many scientific and engineering problems require performing Bayesian inference in function spaces, where the unknowns are infinite-dimensional. In such problems, choosing an appropriate prior distribution is an important task. In particular, when the function to be inferred is subject to sharp jumps, the commonly used Gaussian measures become unsuitable. On the other hand, the so-called total variation (TV) prior can only be defined in a finite-dimensional setting, and does not lead to a well-defined posterior measure…

Cited by 27 publications (43 citation statements)
References 32 publications
“…In the framework of Bayesian inference, both ξ and d are random variables. Then, the posterior probability density of ξ can be derived by Bayes' rule, i.e., π(ξ|d) ∝ π(d|ξ)π(ξ), where π(ξ) is the prior distribution carrying the information available before the data are observed; it can be hybrid, e.g., π(ξ) can be a hybrid of a Gaussian density and a TV penalty, which has been proved to be well-posed in the work of Yao et al. The data are embodied by the likelihood function π(d|ξ) in the Bayesian formulation. For convenience of notation, we will use π_d(ξ) to denote the posterior density π(ξ|d) and L(ξ) to denote the likelihood function π(d|ξ).…”
Section: Bayesian Inference for Inverse Problems
confidence: 99%
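A minimal sketch of how the unnormalized posterior in this statement might be evaluated on a discretized unknown. The grid, the inverse prior covariance C_inv, the TV weight lam, the noise variance sigma2, and the forward map are all illustrative assumptions, not taken from the cited work:

```python
import numpy as np

def log_posterior(xi, d, forward, C_inv, lam, sigma2):
    """Unnormalized log pi(xi|d) = log pi(d|xi) + log pi(xi) for a
    discretized 1-D unknown xi under a hybrid Gaussian-plus-TV prior.
    forward : callable, discretized forward map G(xi) (assumed)
    C_inv   : inverse covariance of the Gaussian prior factor
    lam     : weight of the TV penalty
    sigma2  : variance of the additive Gaussian observation noise
    """
    # Gaussian prior factor: -(1/2) xi^T C^{-1} xi
    log_gauss = -0.5 * xi @ (C_inv @ xi)
    # Discrete total-variation penalty: sum_i |xi_{i+1} - xi_i|
    tv = np.sum(np.abs(np.diff(xi)))
    # Gaussian log-likelihood L(xi) = pi(d|xi)
    resid = d - forward(xi)
    log_like = -0.5 * np.sum(resid ** 2) / sigma2
    return log_like + log_gauss - lam * tv
```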
“…However, in many practical problems, such as medical image reconstruction, the functions or images that one wants to recover are often subject to sharp jumps or discontinuities. Gaussian prior distributions are typically not suitable for modeling such functions [42]. To this end, several non-Gaussian priors have been proposed to model such images, e.g., [37].…”
Section: Introduction
confidence: 99%
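Why jumps clash with Gaussian priors can be made concrete with a toy computation. The discretization and the two energies below are our own illustrative choices, not taken from the cited works: under mesh refinement, the quadratic smoothness energy of a step function grows without bound, while its total variation stays O(1):

```python
import numpy as np

# Toy grid on [0, 1] (illustrative choice of resolution).
n = 200
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
step = (x > 0.5).astype(float)  # function with a sharp jump
ramp = x.copy()                 # smooth function with the same endpoints

def h1_energy(u):
    # Discrete ||u'||_{L2}^2, the exponent of a Gaussian smoothness prior
    return np.sum(np.diff(u) ** 2) / h

def total_variation(u):
    # Discrete ||u'||_{L1}, the exponent of a TV-type prior
    return np.sum(np.abs(np.diff(u)))

# The Gaussian energy of the jump scales like 1/h, so refinement drives
# its prior probability to zero; its total variation stays bounded.
print(h1_energy(step), total_variation(step))  # ~199.0, 1.0
print(h1_energy(ramp), total_variation(ramp))  # ~1.0,   1.0
```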
“…Since these prior distributions differ significantly from Gaussian ones, many sampling schemes based on the Gaussian prior cannot be used directly. To address this issue, a hybrid prior was proposed in [41]. The hybrid prior is motivated by the total variation (TV) regularization [31] in the deterministic setting; however, it has been proven in [20] that the TV-based prior does not converge to a well-defined infinite-dimensional measure as the discretization dimension increases.…”
Section: Introduction
confidence: 99%
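For reference, the hybrid construction referred to in these statements is usually written as a change of measure from a Gaussian reference measure. The following schematic formulation uses our own notation and is an assumption about the standard form, not a quotation:

```latex
% TV-Gaussian (hybrid) prior: reweight a Gaussian reference measure
% \mu_0 = N(0, C) on the function space by a TV penalty with weight \lambda.
\frac{\mathrm{d}\mu_{\mathrm{pr}}}{\mathrm{d}\mu_0}(u)
  \propto \exp\!\bigl(-\lambda \,\|u\|_{\mathrm{TV}}\bigr),
\qquad
\|u\|_{\mathrm{TV}} = \int_\Omega |\nabla u(x)| \,\mathrm{d}x .

% With a likelihood of negative log-density \Phi(u; d), the posterior is
% again absolutely continuous with respect to \mu_0:
\frac{\mathrm{d}\mu^{d}}{\mathrm{d}\mu_0}(u)
  \propto \exp\!\bigl(-\lambda \,\|u\|_{\mathrm{TV}} - \Phi(u; d)\bigr).
```

Because the Gaussian reference measure μ0 is well defined in infinite dimensions and the TV term only reweights it, the resulting posterior avoids the discretization pathology of a pure TV prior noted in [20].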
“…There has been substantial interest in the statistics literature in recent years in nonparametric inference for infinite-dimensional PDE models; see [15] for an overview and references, and Giné and Nickl [21] for a comprehensive monograph on the mathematical foundations of infinite-dimensional statistical models. This approach can result in MCMC algorithms that are robust with respect to refinement of the discretisation level; see, for example, [11,12,14,39,50,52]. There are also other randomization- and optimization-based methods that have recently been proposed in the literature; see, for example, [2,51].…”
Section: Introduction
confidence: 99%
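The discretization-robust MCMC methods cited in this statement are commonly exemplified by the preconditioned Crank-Nicolson (pCN) algorithm, whose acceptance probability depends only on the non-Gaussian part of the target (here, the data misfit plus the TV penalty), not on the Gaussian reference density. A minimal sketch under assumed interfaces; phi and sample_prior are hypothetical callables:

```python
import numpy as np

def pcn_step(u, phi, sample_prior, beta, rng):
    """One preconditioned Crank-Nicolson (pCN) step.
    phi          : negative log-density relative to the Gaussian reference
                   measure (e.g., data misfit plus lam * TV penalty)
    sample_prior : draws a sample from the Gaussian reference N(0, C)
    beta         : step-size parameter in (0, 1]
    Because the Gaussian factor cancels in the acceptance ratio, the
    acceptance rate does not degenerate as the discretization is refined.
    """
    v = np.sqrt(1.0 - beta ** 2) * u + beta * sample_prior()
    if np.log(rng.uniform()) < phi(u) - phi(v):
        return v, True   # proposal accepted
    return u, False      # proposal rejected
```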