2019
DOI: 10.1137/18m123147x

A Scale-Invariant Approach for Sparse Signal Recovery

Abstract: In this paper, we study the ratio of the L1 and L2 norms, denoted L1/L2, to promote sparsity. Due to its non-convexity and non-linearity, this scale-invariant model has received little attention. Compared to popular models in the literature, such as the Lp model for p ∈ (0, 1) and the transformed L1 (TL1), the ratio model is parameter-free. Theoretically, we present a strong null space property (sNSP) and prove that any sparse vector is a local minimizer of the L1/L2 model provided with this …
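As a minimal sketch of the abstract's central idea (not code from the paper itself), the ratio ||x||_1 / ||x||_2 is a scale-invariant sparsity measure: it equals 1 for a one-sparse vector, reaches sqrt(n) for a fully dense vector with equal magnitudes, and is unchanged when x is rescaled:

```python
import numpy as np

def l1_over_l2(x, eps=1e-12):
    """Scale-invariant sparsity measure ||x||_1 / ||x||_2.

    Ranges from 1 (one-sparse) up to sqrt(n) (dense with equal
    magnitudes); invariant under x -> c*x for any c != 0.
    """
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x, 1) / max(np.linalg.norm(x, 2), eps)

sparse = np.array([0.0, 5.0, 0.0, 0.0])  # one-sparse
dense = np.array([1.0, 1.0, 1.0, 1.0])   # fully dense

print(l1_over_l2(sparse))       # 1.0
print(l1_over_l2(dense))        # 2.0 (= sqrt(4))
print(l1_over_l2(10 * sparse))  # 1.0 -- unchanged by rescaling
```

This illustrates why the model is parameter-free: smaller values of the ratio directly indicate sparser vectors, with no tuning parameter such as p in the Lp model.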

Cited by 91 publications (91 citation statements) · References 50 publications
“…For the definition of the ℓ2,1 norm, see (5). Our method generalizes the ℓ1/ℓ2 minimization method studied in [12,17,21,29,36]. We demonstrate in our numerical examples that it outperforms the ℓ2,1 minimization commonly used to solve the joint sparse recovery problem, in the sense that far fewer measurements are required for accurate sparse reconstruction.…”
Section: Introduction
confidence: 76%
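The citing work's definition (5) is not reproduced here; as a sketch under the conventional definition, the ℓ2,1 norm of a matrix sums the ℓ2 norms of its rows, which promotes row-sparsity in joint sparse recovery:

```python
import numpy as np

def l21_norm(X):
    """Conventional ell_{2,1} norm: the sum of the ell_2 norms of
    the rows of X. Zero rows contribute nothing, so minimizing it
    encourages entire rows to vanish (joint/row sparsity)."""
    return float(np.linalg.norm(X, axis=1).sum())

X = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # zero row: contributes 0
              [0.0, 2.0]])  # row norm 2
print(l21_norm(X))  # 7.0
```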
“…From Fig. 1, and similarly to [70], where toy examples illustrate the advantages of ℓp and ℓ1/ℓ2, respectively, we also use a similar example to show that, for some special data sets (A, b), R(x) − R(x_s) tends to select a sparser solution.…”
Section: B. Comparing with Other Regularizations
confidence: 99%
“…For the one-sparse signal case, L1/L2 coincides with the L0 norm. Recently, the L1/L2 minimization model has been empirically verified to be efficient when the sensing matrix is coherent and redundant; see, e.g., [8, 11–17]. More specifically, two types of L1/L2 models are commonly used: the constrained and the penalized/unconstrained:…”
Section: Introduction
confidence: 99%
“…where A ∈ R^(m×n) is the compressing matrix and b ∈ R^m is the observation. The constrained model (2) has been widely used in sparse recovery and MRI reconstruction [12,14]. However, since the penalized/unconstrained model (3) can handle both noisy and noiseless observations, while (2) can only deal with noiseless data, it is more meaningful to develop efficient and convergent algorithms to solve (3).…”
Section: Introduction
confidence: 99%
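The formulas for models (2) and (3) are not reproduced in the citation statement. As an illustrative sketch only (the exact data-fidelity weighting and λ in the cited works may differ), a common form of the penalized objective is F(x) = ½‖Ax − b‖² + λ‖x‖₁/‖x‖₂, which can be evaluated on noisy data, where model (2) would not apply:

```python
import numpy as np

def penalized_objective(x, A, b, lam):
    """Illustrative penalized L1/L2 objective (an assumed form):
    F(x) = 0.5 * ||A x - b||_2^2 + lam * ||x||_1 / ||x||_2."""
    fit = 0.5 * np.linalg.norm(A @ x - b) ** 2
    reg = np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
    return fit + lam * reg

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))            # compressing matrix, m < n
x_sparse = np.zeros(20)
x_sparse[3] = 1.0                            # one-sparse ground truth
b = A @ x_sparse + 0.01 * rng.standard_normal(10)  # noisy observation in R^m

x_dense = rng.standard_normal(20)            # a random dense candidate
# The sparse ground truth attains a much lower objective value.
print(penalized_objective(x_sparse, A, b, lam=0.1))
print(penalized_objective(x_dense, A, b, lam=0.1))
```

Because the data fidelity term tolerates the noise in b, this objective remains meaningful on noisy observations, which is the point the citing text makes about model (3).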