2019
DOI: 10.1007/s10915-019-00955-w

Detecting Edges from Non-uniform Fourier Data via Sparse Bayesian Learning

Abstract: In recent investigations, the problem of detecting edges given non-uniform Fourier data was reformulated as a sparse signal recovery problem with an ℓ1-regularized least squares cost function. This result can also be derived by employing a Bayesian formulation. Specifically, reconstruction of an edge map using ℓ1 regularization corresponds to a so-called type-I (maximum a posteriori) Bayesian estimate. In this paper, we use the Bayesian framework to design an improved algorithm for detecting edges from non-uni…
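The reformulation the abstract describes can be summarized as follows; this is a minimal sketch in generic notation (g, A, \hat{f}, and \lambda are illustrative symbols, not necessarily the paper's):

\[
  g^\star = \arg\min_{g}\ \tfrac{1}{2}\,\| A g - \hat{f} \|_2^2 + \lambda \| g \|_1,
\]

where g is the sought edge map, A maps an edge map to the non-uniform Fourier samples \hat{f}, and \lambda > 0 weights the sparsity penalty. Under a Gaussian likelihood and an i.i.d. Laplace prior p(g) \propto \exp(-\lambda \|g\|_1), the same minimizer is the posterior mode,

\[
  g^\star = \arg\max_{g}\ p(g \mid \hat{f})
          = \arg\min_{g}\ \bigl( -\log p(\hat{f} \mid g) - \log p(g) \bigr),
\]

which is the type-I (maximum a posteriori) estimate mentioned in the abstract.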

Cited by 8 publications (6 citation statements)
References 32 publications
“…As noted previously, in the case of sparse signals, SBL with EM yields a comparable posterior, in the sense that it uses an empirically based support-informed prior, to that in (2.19b). However, it is not computationally efficient in higher dimensions [15,34,35]. To further simplify the analysis we choose the non-zero values of the underlying signal to be exactly 1.…”
Section: Numerical Experiments (mentioning)
confidence: 99%
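For context on the quoted remark, the following is a minimal Python sketch of standard SBL with EM hyperparameter updates (in the style of Tipping's evidence maximization); the function name, toy problem, and parameter values are illustrative assumptions, not code from the paper or the citing works.

import numpy as np

def sbl_em(A, y, noise_var=1e-2, n_iters=200):
    """Sparse Bayesian learning via EM for y = A x + noise,
    with prior x ~ N(0, diag(gamma)); returns posterior mean and gamma."""
    m, n = A.shape
    gamma = np.ones(n)                                # prior variances (hyperparameters)
    for _ in range(n_iters):
        GA = gamma[:, None] * A.T                     # Gamma @ A.T, shape (n, m)
        K = noise_var * np.eye(m) + A @ GA            # marginal covariance of y
        mu = GA @ np.linalg.solve(K, y)               # posterior mean of x
        Sigma_diag = gamma - np.einsum('ij,ji->i', GA, np.linalg.solve(K, GA.T))
        gamma = mu**2 + np.maximum(Sigma_diag, 0.0)   # EM update: gamma_i = E[x_i^2]
    return mu, gamma

# Toy test mirroring the quoted setup: non-zero entries equal to exactly 1.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat, _ = sbl_em(A, y)

Each iteration requires a dense m × m solve, so the cost grows quickly with problem size; this is the inefficiency in higher dimensions that the quoted statement alludes to.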
“…, β_k) is a diagonal inverse covariance matrix. Ideas discussed in [50,52,16,11,15,7,5,13,19,20] suggest that conditionally Gaussian priors of the form (1.4) are particularly suited to promote sparsity of Rx. For example, the model proposed in [50] is designed to recover sparse representations of kernel approximations, coining the term sparse Bayesian learning (SBL).…”
Section: Introduction (mentioning)
confidence: 99%
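The prior form that the quote labels (1.4) can be sketched in generic SBL notation; this is an assumed standard form, not the citing paper's exact statement:

\[
  p(x \mid \beta) \propto \exp\!\Bigl( -\tfrac{1}{2}\,(R x)^\top B\,(R x) \Bigr),
  \qquad B = \operatorname{diag}(\beta_1, \dots, \beta_k),
\]

where the precisions \beta_i are estimated from the data. Components of Rx whose learned precision grows large are driven toward zero, which is the mechanism by which this conditionally Gaussian family promotes sparsity of Rx.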
“…Promoting sparse solutions, as done in [50], corresponds to using R = I ∈ ℝ^{n×n} as a regularization operator in (1.4). Further investigations that made use of SBL to promote sparse solutions include [52,55,54,13,19]. In many applications, however, it is some linear transformation Rx that is desired to be sparse.…”
Section: Introduction (mentioning)
confidence: 99%
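To make the distinction in the quote concrete: with R = I the prior shrinks entries of x itself, while a difference operator R shrinks the jumps of x, favoring piecewise-constant signals. A small illustrative sketch follows; the specific operator is an assumption for illustration, not one prescribed by the cited works.

import numpy as np

n = 8
R_identity = np.eye(n)                               # sparsity of x itself, as in [50]
R_diff = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)    # first differences: (Rx)_i = x_{i+1} - x_i

x = np.array([0., 0., 0., 1., 1., 1., 0., 0.])       # piecewise-constant, not itself sparse
print(np.count_nonzero(x), np.count_nonzero(R_diff @ x))  # 3 nonzeros in x vs 2 jumps in Rx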