2013 8th International Workshop on Systems, Signal Processing and Their Applications (WoSSPA)
DOI: 10.1109/wosspa.2013.6602329
Bayesian inference with hierarchical prior models for inverse problems in imaging systems

Abstract: The Bayesian approach is nowadays commonly used for inverse problems. Simple prior laws (Gaussian, Generalized Gaussian, Gauss-Markov and more general Markovian priors) are common in modeling and in Bayesian inference methods. But we still need more appropriate prior models that can account for non-stationarities in signals and for the presence of contours and homogeneous regions in images. Recently, we proposed a family of hierarchical prior models, called Gauss-Markov-Potts, which seems to be …
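As a rough illustration of the kind of hierarchical prior the abstract describes, here is a minimal Python sketch of a Gauss-Potts image prior: a hidden Potts label field selects a class per pixel, and pixel intensities are conditionally Gaussian given the class. The class means and variances, the interaction strength and the number of Gibbs sweeps are illustrative assumptions, and the within-class Markov (Gauss-Markov) dependence is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    K, N, beta = 3, 32, 1.2            # classes, image side, Potts interaction (illustrative)
    m = np.array([0.0, 0.5, 1.0])      # per-class means
    v = np.array([0.01, 0.01, 0.01])   # per-class variances

    # hidden Potts label field z, sampled by a few Gibbs sweeps
    z = rng.integers(0, K, size=(N, N))
    for _ in range(20):
        for i in range(N):
            for j in range(N):
                nb = [z[(i - 1) % N, j], z[(i + 1) % N, j],
                      z[i, (j - 1) % N], z[i, (j + 1) % N]]
                logp = np.array([beta * sum(n == k for n in nb) for k in range(K)])
                p = np.exp(logp - logp.max())
                z[i, j] = rng.choice(K, p=p / p.sum())

    # pixel intensities: conditionally Gaussian given the hidden labels
    f = rng.normal(m[z], np.sqrt(v[z]))

The resulting image f has piecewise-homogeneous regions separated by contours, exactly the structure the abstract says simple stationary priors fail to capture.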

Cited by 4 publications (6 citation statements). References 43 publications.
“…which we can use to infer on f, v_ε and v_f. For the expressions for updating these tilded variables, and for more details, see [30], [4], [31]. Figure 1 shows the generative graphical representations of the supervised and unsupervised models.…”
Section: Advantages of Bayesian versus Regularization Approaches
Mentioning confidence: 99%
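The passage above refers to inferring the image f jointly with a noise variance v_ε and a signal variance v_f. A minimal generative sketch of such an unsupervised hierarchical model follows; the forward matrix H, the dimensions and the inverse-Gamma hyperparameters are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 32, 64                        # unknowns, measurements (illustrative)
    H = rng.normal(size=(m, n))          # known forward operator

    # conjugate inverse-Gamma hyperpriors on both variances (unsupervised case)
    v_f = 1.0 / rng.gamma(shape=2.0, scale=1.0)    # v_f   ~ IG(2, 1)
    v_e = 1.0 / rng.gamma(shape=2.0, scale=10.0)   # v_eps ~ IG(2, 0.1)

    f = rng.normal(0.0, np.sqrt(v_f), size=n)            # f | v_f      ~ N(0, v_f I)
    g = H @ f + rng.normal(0.0, np.sqrt(v_e), size=m)    # g | f, v_eps ~ N(Hf, v_eps I)

In the supervised variant the variances are fixed and only f is inferred; in the unsupervised one, f, v_ε and v_f are estimated jointly, which is what the tilded update expressions in [30], [4], [31] cycle over.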
“…Finally, the third advantage is that we can do better than JMAP by using the Variational Bayesian Approximation methods and, for example, approximate the joint posterior law p(f, g_0, v_ε, v_ξ, v_f | g) by a separable one (Eq. 30), using the Kullback-Leibler divergence KL(q : p) = ∫ q ln(q/p) (Eq. 31), which gives rise to the VBA algorithms [32], [33], [4], [34], [35], [36], [37]. The following figure shows the graphical representation of the new forward model.…”
Section: A Direct Sparsity Case
Mentioning confidence: 99%
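To make the separable approximation concrete, here is a textbook mean-field VBA in NumPy for a toy model (Gaussian data with unknown mean μ and precision τ under a Normal-Gamma prior), not the paper's imaging posterior: the two factors of q(μ)q(τ) are alternately updated so as to reduce KL(q : p).

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(1.0, 0.5, size=200)       # synthetic data: mu = 1, tau = 4
    N, xbar, sx2 = len(x), x.mean(), (x ** 2).sum()

    mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0   # Normal-Gamma prior (illustrative)
    E_tau = a0 / b0
    for _ in range(50):                      # alternate the two mean-field updates
        mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)    # q(mu) = N(mu_N, 1/lam_N)
        lam_N = (lam0 + N) * E_tau
        E_mu2 = mu_N ** 2 + 1.0 / lam_N                # E_q[mu^2]
        a_N = a0 + (N + 1) / 2                         # q(tau) = Gamma(a_N, b_N)
        b_N = b0 + 0.5 * (sx2 - 2 * mu_N * N * xbar + N * E_mu2
                          + lam0 * (E_mu2 - 2 * mu0 * mu_N + mu0 ** 2))
        E_tau = a_N / b_N

    print(mu_N, E_tau)   # should approach the true mu = 1.0 and tau = 4.0

Each factor update is available in closed form precisely because the model is conditionally conjugate, which is what makes VBA cheaper than sampling while still going beyond the JMAP point estimate.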
“…The details of this minimization are discussed in [17]. Thanks to the conjugacy property of the prior model, we obtain the conjugate distributions for z and the parameters.…”
Section: The VBA Algorithm
Mentioning confidence: 99%
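The closed-form updates that the quote attributes to conjugacy can be seen in miniature: a Gamma prior on a Gaussian precision z yields a Gamma posterior with shifted shape and rate. A hypothetical one-variable illustration, not the multivariate update of [17]:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(0.0, 0.5, size=100)   # x_i | z ~ N(0, 1/z), true precision z = 4

    a0, b0 = 1.0, 1.0                    # Gamma(a0, b0) prior on z (illustrative)
    a_post = a0 + len(x) / 2             # conjugacy: posterior is again Gamma
    b_post = b0 + 0.5 * (x ** 2).sum()
    print(a_post / b_post)               # posterior mean of z, close to 4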
“…where z_j is a hidden variable that represents the inverse variance of f_j [4, 1]. This property of the infinite Gaussian mixture makes it possible to propose the following hierarchical model:…”
Section: Unsupervised Bayesian
Mentioning confidence: 99%
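The construction in this quote, with z_j an inverse variance and f_j | z_j Gaussian, is the classic scale mixture of Gaussians: mixing a zero-mean Gaussian over a Gamma-distributed precision gives a heavy-tailed (Student-t) marginal, which is what makes the prior sparsity-inducing. A small NumPy check with illustrative hyperparameters:

    import numpy as np

    rng = np.random.default_rng(4)
    a, b = 1.5, 1.5                            # Gamma hyperparameters (illustrative)
    z = rng.gamma(a, 1.0 / b, size=100_000)    # hidden inverse variances z_j ~ Gamma(a, b)
    f = rng.normal(0.0, 1.0 / np.sqrt(z))      # f_j | z_j ~ N(0, 1/z_j)

    # marginally f_j is Student-t with 2a degrees of freedom: far heavier tails
    # than a Gaussian, hence the sparsity-inducing behaviour
    print(np.mean(np.abs(f) > 3), np.mean(np.abs(rng.normal(size=100_000)) > 3))

With a = b = 1.5 the marginal is a t-distribution with 3 degrees of freedom, whose tail mass beyond 3 standard units is roughly twenty times that of the Gaussian, so a few large coefficients coexist with many near-zero ones.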