2015
DOI: 10.1137/140952363

Minimization of $\ell_{1-2}$ for Compressed Sensing

Abstract: We study minimization of the difference of ℓ1 and ℓ2 norms as a non-convex and Lipschitz continuous metric for solving constrained and unconstrained compressed sensing problems. We establish exact (stable) sparse recovery results under a restricted isometry property (RIP) condition for the constrained problem, and a full-rank theorem of the sensing matrix restricted to the support of the sparse solution. We present an iterative method for ℓ1−2 minimization based on the difference of convex functions algorithm…


Cited by 443 publications (327 citation statements) · References 63 publications
“…It is shown in [21,40] that any limit point of the DCA sequence is a stationary point; in Theorem 5, we give a tighter result, which states that the limit point is d-stationary rather than stationary. These stationarity concepts are nested: the set of local minimizers is contained in the set of d-stationary points, which is contained in the set of stationary points.…”
Section: Stationary Points (mentioning; confidence: 67%)
“…The former was recently proposed in [22,40] as an alternative to L1 for CS, and the latter is often used in statistics and machine learning [33,41]. Numerical simulations show that L1 minimization often fails when MSF < 1, in which case we demonstrate that both L1−2 and CL1 outperform the classical L1 method.…”
Section: Our Contributions (mentioning; confidence: 73%)
“…vTGV enhances the smoothness of the brain image and reconstructs the spatial distribution of the current density more precisely. Meanwhile, motivated by the performance of ℓ1−2 regularization in compressive sensing reconstruction and other image processing problems (Esser et al., 2013; Lou et al., 2014; Yin et al., 2015), we incorporate ℓ1−2 regularization into the objective function. Numerical experiments show that ℓ1−2 regularization provides faster convergence and yields a sparser source image than ℓ1-norm regularization.…”
Section: Introduction (mentioning; confidence: 99%)