2013 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2013.6638830

On exact ℓq denoising

Abstract: Recently, a lot of attention has been given to penalized least-squares formulations for sparse signal reconstruction in the presence of noise. The penalty is responsible for inducing sparsity, and the common choice is the convex ℓ1 norm. While an ℓ0 penalty generates maximum sparsity, it has been avoided due to its lack of convexity. With the hope of gaining improved sparsity while retaining computational tractability, there has been recent interest in the ℓq penalty. In this paper we provide a novel cycl…
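In the denoising setting (identity design matrix) the objective min_x ½‖y − x‖² + λ‖x‖_q^q separates across coordinates, so each entry reduces to the scalar problem min_x ½(x − z)² + λ|x|^q. The following is a minimal sketch of that scalar thresholding step under this standard formulation; the bracketing bounds and the use of scipy.optimize.brentq are implementation choices of the sketch, not necessarily the algorithm proposed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def prox_lq_scalar(z, lam, q):
    """Minimize 0.5*(x - z)**2 + lam*|x|**q for 0 < q < 1 (sketch).

    The minimizer is either 0 or the larger positive root of the
    stationarity equation x + lam*q*x**(q-1) = |z|.
    """
    a = abs(z)
    if a == 0.0:
        return 0.0
    # g(x) = x + lam*q*x**(q-1) - a attains its minimum at x_min; if
    # g(x_min) > 0 there is no nonzero stationary point and 0 is optimal.
    x_min = (lam * q * (1.0 - q)) ** (1.0 / (2.0 - q))
    def g(x):
        return x + lam * q * x ** (q - 1.0) - a
    if x_min >= a or g(x_min) > 0.0:
        return 0.0
    # The larger root lies in [x_min, a]; keep it only if it beats x = 0.
    x_star = brentq(g, x_min, a)
    f0 = 0.5 * a ** 2
    f_star = 0.5 * (x_star - a) ** 2 + lam * x_star ** q
    return np.sign(z) * x_star if f_star < f0 else 0.0

def lq_denoise(y, lam, q):
    """Exact coordinate-wise solution of min_x 0.5*||y - x||^2 + lam*||x||_q^q."""
    return np.array([prox_lq_scalar(v, lam, q) for v in y])
```

For 0 < q < 1 this map acts as a thresholding rule: small entries are set exactly to zero, while large entries are shrunk by λq|x|^{q−1}, which vanishes as the magnitude grows, so large coefficients are attenuated less than under soft thresholding.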

Cited by 11 publications (13 citation statements)
References 33 publications
“…The design matrix A ∈ R^{100×256} is also generated from N(0, 1) then column normalized. We set the signal-to-noise ratio at 16.5 to match the simulated example from Marjanovic and Solo [2013] which gives σ = 0.0369. Figure 5 plots the mean squared error (MSE) versus the log-regularisation penalty and the power in the Lq-norm penalty.…”
Section: Poisson Fused Lasso
confidence: 99%
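As a rough illustration, the quoted simulation setup can be written down in a few lines. The random seed, the sparsity level of the ground-truth signal, and the SNR-to-σ conversion are assumptions of this sketch; σ = 0.0369 is simply taken as given from the quote.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an assumption, not from the quoted paper

# Design matrix A in R^{100x256} with N(0, 1) entries, then column-normalized.
n, p = 100, 256
A = rng.standard_normal((n, p))
A /= np.linalg.norm(A, axis=0)

# Sparse ground truth and noisy observations; sigma = 0.0369 as quoted
# (the SNR-to-sigma conversion is not reproduced here).
k = 10                                     # number of nonzeros: an assumption
x_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
sigma = 0.0369
y = A @ x_true + sigma * rng.standard_normal(n)
```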
“…Recently, nonconvex feature selection methods for sparse signals estimation have gained increasing attention from the statistical learning community, including the classic SCAD penalty, ℓq penalty, and horseshoe regularization. There are a number of future directions for research, such as regularized logistic regression and structural sparsity learning.…”
Section: Discussion
confidence: 99%
“…While spike‐and‐slab priors provide full model uncertainty quantification, they can be hard to scale to very high‐dimensional problems and can have poor sparsity properties. On the other hand, techniques like proximal algorithms can solve nonconvex optimization problems, which are fast and scalable, but they generally do not provide a full assessment of model uncertainty…”
Section: Introduction
confidence: 99%
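To make the "fast and scalable" point concrete, a generic proximal-gradient loop for a separable nonconvex penalty takes only a few lines. The hard-threshold prox (an ℓ0 penalty) and the fixed step size used below are illustrative assumptions, not the specific methods of the works cited in the quote.

```python
import numpy as np

def hard_threshold(z, t):
    """Prox of step*lam*||x||_0: keep entries whose magnitude exceeds t."""
    return np.where(np.abs(z) > t, z, 0.0)

def proximal_gradient(A, y, lam, step, n_iter=200):
    """Generic proximal-gradient loop for 0.5*||y - A x||^2 + lam*||x||_0 (sketch).

    step should be at most 1 / ||A||_2^2 for the gradient step to be stable.
    """
    x = np.zeros(A.shape[1])
    t = np.sqrt(2.0 * lam * step)          # threshold induced by the l0 prox
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth data-fit term
        x = hard_threshold(x - step * grad, t)
    return x
```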
“…In our future work, two-dimensional directional multiscale approaches [44] may provide sparser representations for seismic data. Second, sparsity priors could be enforced on multiple signals as well, with a need for more automation in optimal choices on loss functions in the proximal formulation, potentially by using other measures than ℓ1 [56][57][58]. The Bayesian framework provided in this work could also serve to develop other statistical approaches for multiple removal, e.g.…”
Section: Discussion
confidence: 99%