2021
DOI: 10.48550/arxiv.2103.15618
Preprint

Empirical Bayesian Inference using Joint Sparsity

Jiahui Zhang,
Anne Gelb,
Theresa Scarnati

Abstract: This paper develops a new empirical Bayesian inference algorithm for solving a linear inverse problem given multiple measurement vectors (MMV) of under-sampled and noisy observable data. Specifically, by exploiting the joint sparsity across the multiple measurements in the sparse domain of the underlying signal or image, we construct a new support informed sparsity promoting prior. Several applications can be modeled using this framework, and as a prototypical example we consider reconstructing an image from s…
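As a rough illustration of the support-informed prior idea described in the abstract, the sketch below (hypothetical function and variable names, not the authors' code) averages sparse-domain coefficient magnitudes across multiple measurement vectors to estimate a shared support, which could then supply spatially varying weights for a sparsity-promoting prior:

```python
import numpy as np

def estimate_joint_support(coeffs, tau=0.5):
    """Estimate a shared sparse-domain support from multiple measurements.

    coeffs : (J, n) array -- sparse-domain coefficients of J measurement
             vectors of the same underlying signal (e.g. gradient values).
    tau    : relative threshold in (0, 1).
    Returns a boolean support mask and per-coefficient weights for a
    weighted-l1 prior: weak penalty on the estimated support, strong
    penalty off it.
    """
    avg_mag = np.mean(np.abs(coeffs), axis=0)   # exploit joint sparsity
    support = avg_mag > tau * avg_mag.max()     # crude support estimate
    weights = np.where(support, 0.1, 1.0)       # support-informed weights
    return support, weights

# Toy example: 3 noisy measurements of a signal sparse at indices 2 and 7.
rng = np.random.default_rng(0)
x = np.zeros(10)
x[[2, 7]] = [3.0, -2.0]
coeffs = x + 0.05 * rng.standard_normal((3, 10))
support, weights = estimate_joint_support(coeffs)
```

Averaging across the J measurements before thresholding is what makes the estimate "joint": noise that varies between measurements averages out, while the common support does not.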

Cited by 2 publications (4 citation statements)
References 28 publications
“…In this framework the hyper-parameters are often approximated by the Expectation Maximization (EM) [22] or the evidence maximization approach [43]. In some cases the hyper-parameters for the sparse prior are determined empirically from the given data [66,52,70]. Finally, we note that (joint recovery) SBL is designed for stationary support in the sparse domain, which is pointedly not our assumption in this investigation.…”
Section: Sparse Bayesian Learning (mentioning)
confidence: 99%
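The EM-based hyper-parameter approximation mentioned in the quote can be sketched for the classic SBL model y = Φx + ε with per-coefficient Gaussian prior x_i ~ N(0, γ_i). This is a minimal sketch with a fixed noise level; names are illustrative and not taken from any cited implementation:

```python
import numpy as np

def sbl_em(Phi, y, sigma2=0.01, n_iter=50):
    """Sparse Bayesian learning hyper-parameters via EM.

    Model:  y = Phi @ x + noise,  x_i ~ N(0, gamma_i).
    E-step: Gaussian posterior covariance/mean of x given current gamma.
    M-step: gamma_i <- mu_i**2 + Sigma_ii (standard EM update; gammas of
            inactive coefficients shrink toward zero, pruning them).
    """
    m, n = Phi.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        # E-step: posterior of x under the current hyper-parameters
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        # M-step: update the prior variances
        gamma = mu**2 + np.diag(Sigma)
    return mu, gamma

# Toy under-sampled problem: 20 measurements of a 2-sparse length-40 signal.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[5, 17]] = [1.0, -1.5]
y = Phi @ x_true + 0.05 * rng.standard_normal(20)
mu, gamma = sbl_em(Phi, y)
```

Note the quote's caveat: because a single set of γ_i governs all measurements in joint-recovery SBL, the sparse-domain support is implicitly assumed stationary across measurements.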
“…That is, we introduce a prior that simultaneously promotes intra-image sparsity and inter-image similarity. To this end, we note that the classic sparse Bayesian learning (SBL) [60,67,71,19,70] requires a shared support of all the collected measurements to approximate edges. Such an assumption will clearly be violated when change occurs between sequential data collections.…”
Section: Introduction (mentioning)
confidence: 99%
“…The idea of incorporating space-variant information is in fact not new in the context of mathematical methods for image reconstruction. Early approaches can be traced back to contributions in the field of diffusion-type PDEs for imaging [115,129,117,120,130] and statistical approaches [51,116,29,26,25,114,134]. In the last couple of years, and from a different perspective, a few contributions have also been made in the context of (deep) learning approaches for imaging [64,99,100,102,86].…”
Section: Incorporating Space Variance (mentioning)
confidence: 99%
“…We further mention that the classical literature on compressed sensing algorithms has been revisited and interpreted in probabilistic terms; in this perspective, the popular ℓ1-regularisation terms can be thought of as deriving from a support-informed or spatially-adaptive prior, where the local weights are typically estimated starting from the observable data following an empirical Bayesian approach -- see [134] and references therein.…”
Section: Statistical Approaches (mentioning)
confidence: 99%
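The weighted ℓ1 penalty with empirically estimated local weights described in that passage could, for instance, be minimized by a proximal gradient (ISTA) iteration in which each coefficient gets its own soft-threshold level. This is a hypothetical sketch under that formulation, not code from the cited works:

```python
import numpy as np

def weighted_l1_ista(A, y, w, lam=1.0, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam * sum_i w_i * |x_i| via ISTA.

    w : per-coefficient weights from a support-informed prior
        (small on the estimated support, large elsewhere).
    """
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    t = 1.0 / L                        # step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)          # gradient of the data-fit term
        z = x - t * g
        thresh = t * lam * w           # spatially varying threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

# Toy example: support-informed weights favor the true support {4, 33}.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[4, 33]] = [2.0, -1.0]
y = A @ x_true + 0.01 * rng.standard_normal(30)
w = np.ones(60)
w[[4, 33]] = 0.1                       # weak penalty on the estimated support
x_hat = weighted_l1_ista(A, y, w)
```

Because the threshold is smaller where the weight is smaller, coefficients on the estimated support are shrunk less, which is exactly the spatially-adaptive behavior the quoted passage attributes to such priors.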