2021
DOI: 10.1016/j.acha.2020.01.002
ℓ1-Analysis minimization and generalized (co-)sparsity: When does recovery succeed?

Abstract: This paper investigates the problem of signal estimation from undersampled noisy sub-Gaussian measurements under the assumption of a cosparse model. Based on generalized notions of sparsity, we derive novel recovery guarantees for the ℓ1-analysis basis pursuit, enabling highly accurate predictions of its sample complexity. The corresponding bounds on the number of required measurements do explicitly depend on the Gram matrix of the analysis operator and therefore particularly account for its mutual coherence s…
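The ℓ1-analysis basis pursuit referred to in the abstract solves min_z ||Ωz||_1 subject to ||Az - y||_2 ≤ η, where Ω is the analysis operator, A the measurement matrix, and η the noise level. The following is a minimal sketch in Python using cvxpy as a generic convex solver; the choice of Ω as a 1-D finite difference operator, the dimensions, and the signal model are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of l1-analysis basis pursuit (assumed setup, not the paper's).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

n, m = 128, 60        # ambient dimension and number of measurements (assumed)
eta = 0.1             # assumed noise level, ||e||_2 <= eta

# Analysis operator Omega: 1-D finite differences, so Omega @ x is sparse
# for piecewise-constant x (the cosparse model).
Omega = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

# Piecewise-constant ground-truth signal.
x0 = np.repeat(rng.normal(size=4), n // 4)

# Sub-Gaussian (here: Gaussian) measurement matrix and noisy measurements.
A = rng.normal(size=(m, n)) / np.sqrt(m)
e = rng.normal(size=m)
y = A @ x0 + eta * e / np.linalg.norm(e)   # noise scaled so ||e||_2 = eta

# l1-analysis basis pursuit:  minimize ||Omega z||_1  s.t.  ||A z - y||_2 <= eta
z = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(Omega @ z)),
                     [cp.norm2(A @ z - y) <= eta])
problem.solve()

print("relative reconstruction error:",
      np.linalg.norm(z.value - x0) / np.linalg.norm(x0))
```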

Cited by 71 publications (30 citation statements)
References 78 publications
“…By using a more general analysis of [27], the authors in [19] obtain an explicit formula describing the required number of measurements. Their bound depends on the coherence structure of Ω.…”
Section: A. Prior Work (mentioning)
confidence: 99%
“…In [1], implicit formulas are derived for the upper-bound (5) in case of ℓ1 and nuclear norm. Recently, an explicit upper-bound for U_δ in case of ℓ1-analysis and TV minimization is presented in [11]. The proposed bound depends on a notion called "generalized analysis sparsity" and is numerically observed to be tight for many analysis operators.…”
Section: Introduction (mentioning)
confidence: 99%
“…p ≤ n; due to the similarity of the arguments used in this paper, we include square matrices in the category of fat matrices. Tall Ω matrices cover various redundant ℓ1-analysis operators that are common in practice; in particular, redundant wavelet frames [12], and redundant random frames (widely used as a benchmark template in [9], [11], [13], [14]). Also fat matrices include examples such as the one-dimensional finite difference operator Ω_d and nonredundant random analysis operators (used in [11, Section 3.3]).…”
Section: Introduction (mentioning)
confidence: 99%
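The citation above sorts analysis operators by shape: fat (p ≤ n, with square matrices counted as fat) versus tall (p > n). A small sketch of the three example classes it names, with dimensions chosen arbitrarily for illustration:

```python
# Illustrative fat vs. tall analysis operators; dimensions are assumptions.
import numpy as np

n = 64                            # ambient dimension (assumed)
rng = np.random.default_rng(1)

# Fat (p = n - 1): the one-dimensional finite difference operator Omega_d.
Omega_d = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

# Square (p = n), counted among the fat matrices: a nonredundant
# random analysis operator.
Omega_sq = rng.normal(size=(n, n))

# Tall (p > n): a redundant random frame, here with p = 2n rows.
Omega_frame = rng.normal(size=(2 * n, n))

for name, Om in [("finite difference", Omega_d),
                 ("nonredundant random", Omega_sq),
                 ("redundant random frame", Omega_frame)]:
    p = Om.shape[0]
    print(f"{name}: p = {p}, n = {n} -> {'fat' if p <= n else 'tall'}")
```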