Simultaneous Structures in Convex Signal Recovery—Revisiting the Convex Combination of Norms (2019)
DOI: 10.3389/fams.2019.00023

Abstract: In compressed sensing, one uses known structure of otherwise unknown signals to recover them from as few linear observations as possible. The structure comes in the form of some notion of compressibility, including different types of sparsity and low-rankness. In many cases, convex relaxations allow the inverse problems to be solved efficiently with standard convex solvers at almost-optimal sampling rates. A standard practice to account for multiple simultaneous structures in convex optimization is to add further regularizers …

Cited by 9 publications (6 citation statements) · References 32 publications

Citation statements (ordered by relevance):
“…and s > 0 is a parameter that represents the sparsity (or an approximation thereof) of the vector we are interested in recovering. Similar notions of atomic norms that promote simultaneous low rank and sparsity have appeared in [25], [29]. We will show in the next section that using ‖·‖_{*,s} as a regularizer in lifted formulations of sparse phase retrieval and PCA gives sample complexity and error bounds nearly identical to the linear regression case.…”
Section: Key Tool: A Sparsity-and-Low-Rank-Inducing Atomic Norm
confidence: 84%
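
For context, the quoted statement refers to an atomic norm. The general definition of an atomic norm as the gauge of the convex hull of an atom set (in the sense of Chandrasekaran et al.) is recalled below as standard background; the specific atom set that defines ‖·‖_{*,s} is given in the citing paper and is not reproduced here.

```latex
% General atomic-norm definition: the gauge of the convex hull of an atom set A.
% Standard background material, not the specific construction of the citing paper.
\[
  \|X\|_{\mathcal{A}}
  \;=\; \inf\{\, t > 0 \;:\; X \in t \cdot \operatorname{conv}(\mathcal{A}) \,\}
  \;=\; \inf\Bigl\{ \textstyle\sum_{a \in \mathcal{A}} c_a \;:\; X = \sum_{a \in \mathcal{A}} c_a\, a,\ c_a \ge 0 \Bigr\}.
\]
```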
“…The trace regularization in our estimator encourages low rank, while the ℓ_1 regularization encourages sparsity. However, recent work [24], [25] has shown it is impossible to take advantage of both kinds of structure simultaneously with a regularizer that is merely a convex combination of the two structure-inducing regularizers; the best we can do is exploit either the low rank as in non-sparse phase retrieval, in which case we get O(p) complexity, or the s^2-sparsity, in which case we get O(s^2) complexity.…”
Section: Key Tool: A Sparsity-and-Low-Rank-Inducing Atomic Norm
confidence: 99%
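
The statement above concerns regularizing with a convex combination of the nuclear norm and the entrywise ℓ_1 norm. A minimal sketch of that combined-regularizer approach is given below; it is an illustration, not the estimator of the citing paper or of [24], [25], and the measurement operator, weights, and problem sizes are assumptions made for the example.

```python
# Sketch: recover a simultaneously sparse and rank-1 matrix from Gaussian linear
# measurements by minimizing a convex combination of the nuclear norm and the
# entrywise l1 norm. All sizes and weights below are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
p, s, m = 30, 4, 200                 # dimension, sparsity, number of measurements

# Ground truth: outer product of an s-sparse vector (rank 1 and s^2-sparse).
x = np.zeros(p)
x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
X_true = np.outer(x, x)

# Linear measurements y_i = <A_i, X_true>, stacked as one matrix acting on vec(X).
A = rng.standard_normal((m, p * p))
y = A @ X_true.ravel(order="F")      # column-major to match cvxpy's vec convention

lam, mu = 1.0, 1.0                   # weights of the convex combination (illustrative)
X = cp.Variable((p, p), symmetric=True)
objective = cp.Minimize(lam * cp.normNuc(X) + mu * cp.sum(cp.abs(X)))
problem = cp.Problem(objective, [A @ cp.vec(X) == y])
problem.solve()

print("relative error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```

Consistent with the quoted statement, one should not expect this combined regularizer to succeed with fewer measurements than the better of the two individual regularizers used alone.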
“…If, in addition, a sparsity constraint is to be imposed, the problem becomes considerably more difficult. In particular, linear combinations of the convex regularizers no longer lead to sample-efficient recovery guarantees [OJF+15], even when using optimal tuning [KSJ19]. Only under additional structural assumptions are recovery guarantees available, using an alternating minimization approach [LLJB17, GKS19].…”
Section: Blind Deconvolution
confidence: 99%
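
To make the alternating-minimization idea referenced here concrete, the sketch below runs alternating least squares with hard thresholding on a generic bilinear model y_i = h^T A_i x with a sparse x. It is only an illustration of the general strategy, not the algorithm of [LLJB17] or [GKS19]; the model, the random initialization, and all sizes are assumptions made for the example.

```python
# Alternating least squares with hard thresholding for a bilinear model
# y_i = h^T A_i x, where x is s-sparse. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(1)
k, p, s, m = 10, 40, 3, 300          # dim(h), dim(x), sparsity of x, measurements

h_true = rng.standard_normal(k)
x_true = np.zeros(p)
x_true[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, k, p))   # measurement matrices A_i
y = np.einsum("mkp,k,p->m", A, h_true, x_true)

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

h = rng.standard_normal(k)           # random start; the cited works instead rely
x = rng.standard_normal(p)           # on carefully designed initializations
for _ in range(50):
    # With h fixed, y = M_x @ x is linear in x: least squares, then keep s entries.
    M_x = np.einsum("mkp,k->mp", A, h)
    x = hard_threshold(np.linalg.lstsq(M_x, y, rcond=None)[0], s)
    # With x fixed, y = M_h @ h is linear in h: plain least squares.
    M_h = np.einsum("mkp,p->mk", A, x)
    h = np.linalg.lstsq(M_h, y, rcond=None)[0]

E = np.outer(h, x) - np.outer(h_true, x_true)
print("relative error:", np.linalg.norm(E) / np.linalg.norm(np.outer(h_true, x_true)))
```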
“…The recent work [7], which builds upon ideas on generalized projections in [10] and uses a Riemannian version of Iterative Hard Thresholding to reconstruct jointly row-sparse and low-rank matrices, only provides local convergence results. The alternative approach of using optimally weighted sums or maxima of convex regularizers [19], the only existing work apart from our predecessor paper [9] that considers non-orthogonal sparse low-rank decompositions, requires optimal tuning of the parameters under knowledge of the ground truth. Whereas one may not hope for a general solution to these problems, the previously mentioned tractable results on nested measurements [2, 10] suggest that suitable initialization methods can be found for specific applications.…”
Section: Related Work II
confidence: 99%
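
As a small illustration of the hard-thresholding viewpoint mentioned in this statement, the following sketch implements only a projection step onto matrices that are jointly row-sparse and low-rank (row selection followed by rank truncation via the SVD). It is not the Riemannian method of [7]; the projection rule and all sizes are assumptions made for the example.

```python
# Projection heuristic onto {at most s nonzero rows} ∩ {rank at most r}:
# keep the s rows of largest l2 norm, then truncate to rank r via the SVD.
import numpy as np

def project_row_sparse_low_rank(X, s, r):
    """Approximate projection of X onto jointly row-sparse, low-rank matrices."""
    row_norms = np.linalg.norm(X, axis=1)
    keep = np.argsort(row_norms)[-s:]      # indices of the s largest rows
    Xs = np.zeros_like(X)
    Xs[keep] = X[keep]                     # row hard thresholding
    U, sv, Vt = np.linalg.svd(Xs, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]    # best rank-r approximation of Xs

# Example: denoise a noisy version of a row-sparse, rank-1 matrix.
rng = np.random.default_rng(2)
n1, n2, s, r = 20, 15, 4, 1
Z = np.zeros((n1, n2))
Z[rng.choice(n1, s, replace=False)] = np.outer(np.ones(s), rng.standard_normal(n2))
noisy = Z + 0.05 * rng.standard_normal((n1, n2))
print("residual:", np.linalg.norm(project_row_sparse_low_rank(noisy, s, r) - Z))
```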
“…)^2 leading to a bound as in (19). Unfortunately, it is not as simple to bound γ_1(ΓK^R_{s_1,s_2} − ΓK^R_{s_1,s_2}, ‖·‖)^2 in an intuitive way without deriving a tight bound on the covering number of ΓK^R_{s_1,s_2} in operator norm.…”
Section: Rank-1 Measurement Operator
confidence: 99%
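
For background on the quantity appearing in this statement, the standard definition of Talagrand's γ_α functional and its entropy (covering-number) upper bound are recalled below; this is textbook material and is not quoted from the citing paper.

```latex
% Talagrand's gamma_alpha functional and its Dudley-type entropy bound
% (standard definitions, stated for a metric space (T, d)).
\[
  \gamma_\alpha(T, d)
  \;=\; \inf_{(T_n)_{n \ge 0}} \, \sup_{t \in T} \, \sum_{n \ge 0} 2^{n/\alpha}\, d(t, T_n),
  \qquad |T_0| = 1, \quad |T_n| \le 2^{2^n},
\]
\[
  \gamma_\alpha(T, d) \;\lesssim\; \int_0^\infty \bigl(\log N(T, d, u)\bigr)^{1/\alpha}\, \mathrm{d}u.
\]
```

Here N(T, d, u) is the covering number of T at scale u with respect to the metric d, so a tight covering-number bound in operator norm is exactly what is needed to control γ_1 in the quoted passage.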