2015
DOI: 10.1109/tsp.2015.2421476
Homotopy Based Algorithms for $\ell_0$-Regularized Least-Squares

Abstract: Sparse signal restoration is usually formulated as the minimization of a quadratic cost function $\|y - Ax\|_2^2$, where $A$ is a dictionary and $x$ is an unknown sparse vector. It is well known that imposing an $\ell_0$ constraint leads to an NP-hard minimization problem. The convex relaxation approach has received considerable attention, where the $\ell_0$-norm is replaced by the $\ell_1$-norm. Among the many efficient $\ell_1$ solvers, the homotopy algorithm minimizes $\|y - Ax\|_2^2 + \lambda \|x\|_1$ with respect to $x$ for a continuum of $\lambda$'s. It is…
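
The abstract's $\ell_1$ homotopy solver tracks the exact piecewise-linear path of minimizers of $\|y - Ax\|_2^2 + \lambda \|x\|_1$ as $\lambda$ varies. The sketch below is not that algorithm (nor the paper's $\ell_0$ continuation); it only approximates a regularization path with a plain proximal-gradient (ISTA) solver warm-started over a decreasing grid of $\lambda$ values, with arbitrary problem sizes chosen for illustration.

```python
import numpy as np

def ista(A, y, lam, n_iter=300, x0=None):
    """Proximal gradient (ISTA) for min_x ||y - A x||_2^2 + lam * ||x||_1."""
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    L = 2.0 * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y)           # gradient of the quadratic term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft thresholding
    return x

# Sweep lambda from large to small with warm starts to approximate the solution path
rng = np.random.default_rng(0)
m, n, k = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam_max = 2.0 * np.max(np.abs(A.T @ y))          # for lam >= lam_max the solution is x = 0
x = np.zeros(n)
for lam in np.geomspace(lam_max, 1e-3 * lam_max, num=15):
    x = ista(A, y, lam, x0=x)
    print(f"lambda = {lam:.4g}  support size = {np.count_nonzero(np.abs(x) > 1e-8)}")
```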

Cited by 38 publications (25 citation statements). References: 57 publications.
“…The sparsity-constrained formulation (2) defines a sparse coding problem with a predefined sparsity parameter $T$. Considering the error-constrained optimization problem (3), it is easy to make OMP satisfy the constraint by measuring the reconstruction error each time after adding a non-zero entry [7]; the proximal method searches for the Pareto optimum as the sparsity level varies [40]; MIQP keeps all the signals within the constraint based on the sparsity of the initialization obtained by the proximal method. As recommended in [7], the error threshold in (3) is set to $c\,n\,\sigma^2$ with $c = 1.15$, and a maximum sparsity parameter $T_m$ (usually the same as $T$) is set to control the sparsity level.…”
Section: Large-scale (Global) Dictionary Learning (citation type: mentioning)
Confidence: 99%
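
The excerpt above describes stopping OMP once the reconstruction error falls below a threshold of the form $c\,n\,\sigma^2$, with a cap $T_m$ on the number of selected atoms. A minimal error-constrained OMP sketch under those assumptions is shown below; it is not the implementation of [7], and the dictionary, noise level, and cap are placeholder values.

```python
import numpy as np

def omp_error_constrained(A, y, eps, T_max):
    """Greedy OMP: add atoms until ||y - A x||_2^2 <= eps or T_max atoms are used."""
    n = A.shape[1]
    residual = y.copy()
    support = []
    x = np.zeros(n)
    while residual @ residual > eps and len(support) < T_max:
        # Pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j in support:
            break                                  # no further progress possible
        support.append(j)
        # Least-squares refit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

# Example with a threshold of the form c * dim(y) * sigma^2, c = 1.15 as in the excerpt
rng = np.random.default_rng(1)
m, n, sigma = 32, 128, 0.05
A = rng.standard_normal((m, n)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(n); x_true[rng.choice(n, 4, replace=False)] = 1.0
y = A @ x_true + sigma * rng.standard_normal(m)
x_hat = omp_error_constrained(A, y, eps=1.15 * m * sigma**2, T_max=10)
```
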
“…The stronger (i.e., the more restrictive) the optimality condition verified by points attained by a given algorithm, the "better" the algorithm. Moreover, these conditions can also give rise to new iterative algorithms [2,4,44]. Although a variety of necessary optimality conditions with different degrees of sophistication have been defined, analyzed, and hierarchized (see Section 2 and Figure 1), there is a lack of connection between some of them.…”
Section: Necessary Optimality Conditions (citation type: mentioning)
Confidence: 99%
“…By combining the inequalities (43) and (44) with the definition of the CEL0 penalty in (3), we obtain…”
Section: D21 Determination of η (citation type: mentioning)
Confidence: 99%
“…The NMSE is defined as $\|x - \hat{x}\|_2 / \|x\|_2$, and the SNR is defined as $20\log(\|x - \hat{x}\|_2 / \|x\|_2)$. In order to visually display this performance, we choose the SL0 [12,13], $\ell_2$-SL0 [17–19] and $\ell_p$-RLS [21] algorithms for comparison.…”
Section: Numerical Simulation and Analysis (citation type: mentioning)
Confidence: 99%
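
For concreteness, the metrics quoted above can be computed directly. In the snippet below the NMSE follows the excerpt's expression, while the SNR uses the common convention $20\log_{10}(\|x\|_2 / \|x - \hat{x}\|_2)$, which may differ from the citing paper's exact definition; x_true and x_hat are placeholder arrays.

```python
import numpy as np

def nmse(x_true, x_hat):
    # Normalized error as quoted in the excerpt: ||x - x_hat||_2 / ||x||_2
    return np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true)

def snr_db(x_true, x_hat):
    # Reconstruction SNR in dB, using the common 20*log10(||x|| / ||x - x_hat||) convention
    return 20 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_true - x_hat))

x_true = np.array([0.0, 1.5, 0.0, -2.0])
x_hat = np.array([0.1, 1.4, 0.0, -1.9])
print(nmse(x_true, x_hat), snr_db(x_true, x_hat))
```
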
“…These methods give adequate consideration to the sparsity and convergence of the solution, but they are unstable in a noisy environment. Based on this, $\ell_2$-SL0 [17–19] transformed the $\ell_0$-norm problem into the regularized least squares problem (LSP) [20], which includes a sparsity regularizer and a deviation term, to improve the performance of sparse vector recovery under noisy conditions. Further, $\ell_p$-RLS [21], which converts the sparsity regularizer into an $\ell_p$-norm, is introduced to effectively reduce the computational complexity.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
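
The $\ell_2$-SL0 approach mentioned above replaces the $\ell_0$ "norm" with a smooth surrogate and adds a least-squares deviation term. Assuming the usual Gaussian surrogate $\sum_i \bigl(1 - e^{-x_i^2 / 2\sigma^2}\bigr)$ and plain gradient descent with a gradually decreased $\sigma$ (details may differ from [17–19] and from the $\ell_p$-RLS variant), a rough sketch is:

```python
import numpy as np

def l2_sl0(A, y, lam=0.1, sigmas=(1.0, 0.5, 0.2, 0.1, 0.05), n_inner=100, step=None):
    """Gradient descent on ||A x - y||_2^2 + lam * sum_i (1 - exp(-x_i^2 / (2 sigma^2))),
    decreasing sigma so that the smooth penalty approaches the l0 'norm'."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]     # least-squares starting point
    if step is None:
        step = 0.5 / np.linalg.norm(A, 2) ** 2   # conservative step for the quadratic part
    for sigma in sigmas:
        for _ in range(n_inner):
            grad_fit = 2.0 * A.T @ (A @ x - y)                            # deviation term
            grad_pen = lam * (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))  # surrogate term
            x = x - step * (grad_fit + grad_pen)
    return x

# Placeholder example
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)); A /= np.linalg.norm(A, axis=0)
x0 = np.zeros(100); x0[[3, 27, 70]] = [1.0, -0.8, 0.6]
y = A @ x0 + 0.02 * rng.standard_normal(40)
x_rec = l2_sl0(A, y)
```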