2013
DOI: 10.1109/tit.2013.2278179

Exact Recovery Conditions for Sparse Representations With Partial Support Information

Authors: Cédric Herzet, Charles Soussen, Jérôme Idier, Rémi Gribonval

Abstract: We address the exact recovery of a k-sparse vector in the noiseless setting when some partial information on the support is available. This partial information takes the form of either a subset of the true support or an approximate subset including wrong atoms as well. We derive a new sufficient and worst-case necessary (in some sense) condition for the success of some procedures based on ℓp-relaxation, Orthogonal Match…

Cited by 54 publications (51 citation statements)
References 44 publications (172 reference statements)

“…Consequently, the OLS and OMP algorithms coincide at the first iteration but usually differ afterward (see Section II-A for the justification). It has been empirically observed that OLS is computationally more expensive yet more reliable than OMP [4]. For more details on the differences between these two algorithms, see [8] and the references therein.…”
Section: Introduction (mentioning)
Confidence: 99%
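As a concrete illustration of the two selection rules contrasted in this statement, here is a minimal Python sketch (illustrative code, not taken from [4] or [8]; the function names and loop structure are assumptions):

```python
import numpy as np

def select_atom_omp(A, r):
    """OMP rule: pick the column of A most correlated with the residual r."""
    return int(np.argmax(np.abs(A.T @ r)))

def select_atom_ols(A, y, support):
    """OLS rule: pick the column whose addition to the current support
    most reduces the residual of the least-squares fit of y."""
    best_j, best_res = -1, np.inf
    for j in range(A.shape[1]):
        if j in support:
            continue
        cols = support + [j]                      # candidate support
        x, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        res = np.linalg.norm(y - A[:, cols] @ x)  # residual after refit
        if res < best_res:
            best_j, best_res = j, res
    return best_j
```

With unit-norm columns and an empty support (so the residual is r = y), both rules reduce to arg max_j |a_jᵀy|, which is why the two algorithms coincide at the first iteration; the per-candidate least-squares solve in OLS accounts for its higher computational cost.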
“…In the past decade, Compressive Sensing (CS) has attracted intense attention; see [5,6,10,16,24] and references therein. It aims to recover a sparsest vector from an underdetermined system of linear equations.…”
Section: Introduction (mentioning)
Confidence: 99%
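For reference, the sparsest-solution problem these statements refer to is standardly written as

```latex
(P_0):\qquad \min_{x \in \mathbb{R}^n} \ \|x\|_0
\quad \text{subject to} \quad Ax = b,
\qquad A \in \mathbb{R}^{m \times n},\ m < n,
```

where ‖x‖₀ denotes the number of nonzero entries of x.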
“…Unfortunately, although the ℓ0-norm characterizes the sparsity of the vector x, the optimization problem (P0) is actually NP-hard because of the discrete and discontinuous nature of the ℓ0-norm. This has resulted in many substitution models for (P0), in which ‖x‖0 is replaced with functions that evaluate the desirability of a would-be solution to Ax = b (see, e.g., [4], [11], [17], [20], [26], and references therein). Because of the relationship…”
Section: Introduction (mentioning)
Confidence: 99%
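The most common substitution replaces ‖x‖₀ with ‖x‖₁, turning (P0) into a linear program (basis pursuit). A minimal sketch with SciPy, assuming the standard x = u − v splitting (the function name basis_pursuit is illustrative, not from the cited works):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve (P1): min ||x||_1 subject to Ax = b, as a linear program.

    Write x = u - v with u, v >= 0; then ||x||_1 = sum(u + v) at the
    optimum, and the constraint Ax = b becomes A u - A v = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    if not res.success:
        raise RuntimeError(res.message)
    return res.x[:n] - res.x[n:]
```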
“…A large body of theoretical work (see, e.g., [4], [10], [12], [19]), together with empirical evidence (see, e.g., [6]), has shown that, provided certain conditions are met, such as the restricted isometry property (RIP), ℓ1-norm minimization (P1) can indeed achieve exact recovery. The original notion of RIP has received much attention and has been extended to the more general case 0 < p < 1 (see, e.g., [7], [9], [20]). Work by Donoho and Tanner [11], using convex geometry, demonstrated the surprising phenomenon that, for any real matrix A, whenever the nonnegative solution to (P0) is sufficiently sparse, it is also the unique solution to (P1).…”
Section: Introduction (mentioning)
Confidence: 99%
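For completeness, the restricted isometry property invoked in this statement is standardly defined as follows: A satisfies the RIP of order k with constant δₖ ∈ [0, 1) if

```latex
(1 - \delta_k)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_k)\,\|x\|_2^2
\qquad \text{for all } k\text{-sparse } x.
```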