2019
DOI: 10.1007/s10444-019-09698-6
Sparse power factorization: balancing peakiness and sample complexity

Abstract: In many applications, one is faced with an inverse problem, where the known signal depends in a bilinear way on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only a few non-zero entries. Sparse Power Factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They have established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees…
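For orientation, the bilinear measurement model described in the abstract can be written compactly as follows. This is a standard formulation of bilinear inverse problems with sparsity priors; the notation is illustrative, not quoted from the paper:

```latex
% Bilinear inverse problem with sparsity priors (illustrative notation):
% recover u, v from m bilinear measurements
y_i = u^{\top} A_i \, v, \qquad i = 1, \dots, m,
\qquad \text{subject to } \|u\|_0 \le s_1, \ \|v\|_0 \le s_2.
% Equivalently: recover the rank-one, bisparse matrix X = u v^{\top}
% from the linear map y = \mathcal{A}(X).
```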

Cited by 12 publications (9 citation statements)
References: 39 publications
“…Other locally convergent methods applied to the recovery of row-sparse (or column-sparse) and low-rank matrices are the sparse power factorization (SPF) and its subspace-concatenated variant (SCSPF), see [19]. While the latter work assumes a high peak-to-average power ratio on the singular vectors of the observed matrix, [13] recently enlarged the class of recoverable matrices by relaxing this constraint.…”
Section: Jointly Low-rank and Bisparse Recovery
confidence: 99%
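For context, the peak-to-average power ratio (peakiness) referred to in this statement is, to our understanding, typically quantified by the ratio of the sup-norm to the Euclidean norm; the formulation below is illustrative, not a quote from [19] or [13]:

```latex
% Peakiness of a vector x (illustrative definition):
\mathrm{peakiness}(x) \;=\; \frac{\|x\|_{\infty}}{\|x\|_{2}},
% SPF-type guarantees in this line of work require this ratio to be
% bounded below by a constant for a singular vector of the signal matrix.
```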
“…An important class of structured signals in many applications consists of matrices that are simultaneously sparse and of low rank. Such matrices occur in sparse phase retrieval [5,17,18], dictionary learning and sparse encoding [19], sparse matrix approximation [20], sparse PCA [21], bilinear compressed sensing problems like sparse blind deconvolution [22][23][24][25][26][27] or, more generally, sparse self-calibration [28]. For example, upcoming challenges in communication engineering and signal processing require efficient algorithms for such problems with theoretical guarantees [29][30][31].…”
Section: Simultaneously Sparse and Low-rank Matrices
confidence: 99%
“…For a suboptimal but tractable initialization, recovery can only be guaranteed for a considerably restricted set of very peaky signals. Relaxed conditions have been worked out recently [27], with the added benefit that the intrinsic balance between additivity and multiplicativity in sparsity is established more explicitly. Further alternating algorithms, such as [33], have been proposed that offer guaranteed local convergence and better empirical performance.…”
Section: Some More Details On Related Work
confidence: 99%
“…For unstructured Gaussian measurements, local guarantees are available for the alternating algorithms Sparse Power Factorization [20] and Alternating Tikhonov regularization and Lasso [11]. However, suitable initialization procedures that complement these methods by constructing a starting point in a small enough neighborhood of the solution are known only for certain special classes of signals, such as signals with few dominant entries [20,14]. Despite the recent progress, it remains largely an open problem whether and how jointly (bi-)sparse and low-rank signals can be efficiently recovered from a near-minimal number of measurements when no such initialization is provided.…”
Section: Introduction
confidence: 99%
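Since several of the statements above refer to SPF-style alternating minimization with a data-driven initialization, here is a minimal sketch of that template. It is our illustrative reconstruction under stated assumptions (rank-one ground truth, i.i.d. Gaussian measurement matrices, thresholded least squares standing in for the sparse solver of [20]); the names spf and hard_threshold are ours, not the authors' reference code.

```python
# Minimal SPF-style sketch: recover a bisparse rank-one pair (u, v) from
# bilinear Gaussian measurements y_i = u^T A_i v. Illustrative only.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    idx = np.argsort(np.abs(x))[-s:]
    z = np.zeros_like(x)
    z[idx] = x[idx]
    return z

def spf(As, y, s1, s2, iters=50):
    """As: (m, n1, n2) stack of measurement matrices; y: (m,) measurements."""
    m, n1, n2 = As.shape
    # Spectral initialization: leading singular pair of the back-projection
    # sum_i y_i A_i (a crude stand-in for the dominant-entry initializations
    # discussed in the citation statements above).
    M = np.tensordot(y, As, axes=1)                  # (n1, n2)
    U, _, Vt = np.linalg.svd(M)
    u, v = U[:, 0], Vt[0, :]
    for _ in range(iters):
        # Fix v: y depends linearly on u through the design rows (A_i v)^T.
        Dv = As @ v                                  # (m, n1)
        u = hard_threshold(np.linalg.lstsq(Dv, y, rcond=None)[0], s1)
        # Fix u: y depends linearly on v through the design rows (A_i^T u)^T.
        Du = np.einsum('ijk,j->ik', As, u)           # (m, n2)
        v = hard_threshold(np.linalg.lstsq(Du, y, rcond=None)[0], s2)
        v /= np.linalg.norm(v) + 1e-12               # scale is absorbed into u
    return u, v

# Usage example on synthetic data:
rng = np.random.default_rng(0)
n1, n2, m = 40, 30, 200
u0 = hard_threshold(rng.standard_normal(n1), 3)
v0 = hard_threshold(rng.standard_normal(n2), 2)
As = rng.standard_normal((m, n1, n2))
y = np.einsum('ijk,j,k->i', As, u0, v0)
u_hat, v_hat = spf(As, y, s1=3, s2=2)
```

The actual SPF algorithm of Lee, Wu, and Bresler uses a hard thresholding pursuit step rather than plain thresholded least squares, and the guarantees discussed above hinge on the quality of the initialization; this sketch only conveys the alternating structure.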