2017
DOI: 10.1109/tit.2016.2642993
High-Dimensional Estimation of Structured Signals From Non-Linear Observations With General Convex Loss Functions

Abstract: In this paper, we study the issue of estimating a structured signal x_0 ∈ ℝ^n from non-linear and noisy Gaussian observations. Supposing that x_0 is contained in a certain convex subset K ⊂ ℝ^n, we prove that accurate recovery is already feasible if the number of observations exceeds the effective dimension of K, which is a common measure for the complexity of signal classes. It will turn out that the possibly unknown non-linearity of our model affects the error rate only by a multiplicative constant. This ach…
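To make the abstract's setup concrete, here is a minimal, self-contained sketch (ours, not the paper's code): Gaussian measurements a_i, an unknown non-linearity f (tanh as a stand-in), and constrained least squares over an l1-ball standing in for the convex set K, solved by projected gradient descent. All names, the solver choice, and parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the recovery setup from the abstract: observe y_i = f(<a_i, x0>)
# with Gaussian a_i and a non-linearity f that the estimator never sees,
# then estimate x0 by constrained least squares over a convex set K.
# Here K is an l1-ball (a standard convex relaxation of sparsity), handled
# by projected gradient descent -- a generic solver choice, not the paper's.

rng = np.random.default_rng(0)
n, m, s = 200, 120, 5                    # ambient dim, observations, sparsity

x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x0 /= np.linalg.norm(x0)

A = rng.standard_normal((m, n))          # Gaussian measurement matrix
y = np.tanh(A @ x0)                      # example non-linearity, unknown to the solver

def project_l1(v, radius):
    """Euclidean projection onto the l1-ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u > (css - radius) / np.arange(1, v.size + 1))[0][-1]
    theta = (css[k] - radius) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
for _ in range(500):                     # plain projected gradient on the squared loss
    x = project_l1(x - step * (A.T @ (A @ x - y)), radius=np.abs(x0).sum())

# Up to the model's multiplicative constant mu = E[f(g) g], x should align with x0.
print("correlation with x0:", x @ x0 / (np.linalg.norm(x) * np.linalg.norm(x0)))
```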

Cited by 38 publications (46 citation statements) · References 50 publications
Citation types: 2 supporting, 44 mentioning, 0 contrasting
“…The squared loss then just corresponds to L(v_1, v_2) = ½(v_1 − v_2)^2. Under relatively mild conditions on L, such as restricted strong convexity, similar recovery guarantees as above can be proven; see again [Gen17].…”
Section: Further Extensions (mentioning)
Confidence: 66%
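For orientation, the squared-loss specialization quoted above can be written out explicitly. The program (P_{L,K}) and the symbols a_i, y_i follow the excerpts on this page; the remaining notation (m observations, stacked matrix A) is our assumption:

```latex
% How the squared loss specializes the generalized estimator; notation
% beyond the quoted excerpts is assumed for illustration.
\[
  (P_{L,K}):\qquad
  \min_{x \in K} \; \frac{1}{m}\sum_{i=1}^{m} L\big(y_i, \langle a_i, x\rangle\big),
\]
\[
  L(v_1, v_2) = \tfrac{1}{2}(v_1 - v_2)^2
  \;\Longrightarrow\;
  \min_{x \in K} \; \frac{1}{2m}\,\|y - A x\|_2^2,
\]
% i.e., the generalized Lasso is recovered as a special case of (P_{L,K}).
```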
“…A somewhat astonishing observation of [PV16; Gen17] was that recovery from single-index observations (1.2) is already possible by means of the vanilla Lasso [Tib96], even though the non-linear distortion f is completely unknown. As we shall see next, such a strategy can also be adapted to the more advanced measurement scheme of (1.4).…”
Section: Algorithmic Approaches (mentioning)
Confidence: 99%
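A minimal sketch of the quoted observation, assuming 1-bit measurements f = sign and scikit-learn's off-the-shelf Lasso (our solver choice; the regularization weight is untuned and purely illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Run the vanilla Lasso on single-index data y_i = f(<a_i, x0>) where f is
# never revealed to the estimator. Here f = sign (1-bit measurements).

rng = np.random.default_rng(1)
n, m, s = 300, 150, 4
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = 1.0
x0 /= np.linalg.norm(x0)

A = rng.standard_normal((m, n))
y = np.sign(A @ x0)                      # unknown non-linear distortion f = sign

xhat = Lasso(alpha=0.05).fit(A, y).coef_ # alpha is an illustrative, untuned choice

# The direction of x0 is identified up to the scale mu = E[f(g) g].
cos = xhat @ x0 / (np.linalg.norm(xhat) * np.linalg.norm(x0) + 1e-12)
print("cosine similarity with x0:", cos)
```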
“…One can also imagine extending our results to general quantization schemes, e.g., a general number of quantization levels [TAH15]. Finally, considering other loss functions in (3) (e.g., see [Gen17]) and the performance of first-order solvers (e.g., see [OS16]) are also interesting directions to pursue.…”
Section: Future Work (mentioning)
Confidence: 99%
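As a hypothetical illustration of the "general quantization schemes" mentioned in this excerpt, a uniform mid-rise quantizer with a configurable number of levels could play the role of the non-linearity f. The function name and parameters below are ours, not taken from [TAH15]:

```python
import numpy as np

def uniform_quantizer(v, levels=4, vmax=3.0):
    """Map v to the nearest of `levels` uniformly spaced values in [-vmax, vmax]."""
    delta = 2 * vmax / levels                          # width of each quantization cell
    idx = np.clip(np.floor((v + vmax) / delta), 0, levels - 1)
    return -vmax + (idx + 0.5) * delta                 # cell midpoints (mid-rise)

g = np.linspace(-4, 4, 9)
print(uniform_quantizer(g, levels=4))                  # 1-bit corresponds to levels=2
```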
“…Despite the universal applicability of the Lasso, practitioners however often choose different types of loss functions for (P_{L,K}), which are specifically tailored to their model hypotheses, e.g., if the output variables y_i are discrete. This issue particularly motivated the first author in [Gen17] to extend the framework of Plan and Vershynin to other choices of L. A key finding of [Gen17] is that, in many situations of interest, restricted strong convexity (RSC) is a crucial property of an empirical risk function to ensure successful signal recovery via (P_{L,K}). The criterion of RSC is indeed satisfied for a large class of loss functions, for instance, all those L : ℝ × ℝ → ℝ which are twice differentiable in the first variable and locally strongly convex in a neighborhood of the origin (cf.…”
Section: Signal Processing and Compressed Sensing (mentioning)
Confidence: 99%
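For readers unfamiliar with the term, one common way of stating restricted strong convexity for an empirical risk is sketched below; the precise formulation, constraint set, and constants in [Gen17] may differ:

```latex
% A generic RSC statement for the empirical risk
% \bar{\mathcal{L}}(x) = \frac{1}{m}\sum_{i=1}^{m} L(y_i, \langle a_i, x \rangle);
% the exact variant used in [Gen17] is not reproduced here.
\[
  \bar{\mathcal{L}}(x) - \bar{\mathcal{L}}(x^\natural)
  - \big\langle \nabla \bar{\mathcal{L}}(x^\natural),\, x - x^\natural \big\rangle
  \;\ge\; C \,\| x - x^\natural \|_2^2
  \qquad \text{for all } x \in K,
\]
% where x^\natural denotes the (appropriately scaled) target vector and C > 0
% is required to hold with high probability once the number of observations
% exceeds the effective dimension of K.
```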