2021
DOI: 10.48550/arxiv.2111.14486
Preprint

Just Least Squares: Binary Compressive Sampling with Low Generative Intrinsic Dimension

Abstract: In this paper, we consider recovering n-dimensional signals from m binary measurements corrupted by noise and sign flips, under the assumption that the target signals have low generative intrinsic dimension, i.e., the target signals can be approximately generated via an L-Lipschitz generator G: ℝ^k → ℝ^n. Although the binary measurement model is highly nonlinear, we propose a least squares decoder and prove that, up to a constant c, with high probability, the least squares decoder achieves a sharp estimation error O(k lo…
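As a rough illustration of the decoder described in the abstract, here is a minimal Python/NumPy sketch. It assumes a linear generator G(z) = Wz purely for simplicity (the paper treats general L-Lipschitz generators), and the dimensions, noise level, and sign-flip probability are hypothetical toy choices; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes: ambient dim n, latent dim k, number of measurements m.
n, k, m = 200, 5, 400

# Hypothetical linear "generator" G(z) = W z; its Lipschitz constant is ||W||_2.
W = rng.normal(size=(n, k)) / np.sqrt(n)

# Ground-truth signal lies in the range of G and is normalized to unit norm.
z_true = rng.normal(size=k)
x_true = W @ z_true
x_true /= np.linalg.norm(x_true)

# Binary measurements: y_i = xi_i * sign(<a_i, x_true> + noise_i),
# where xi_i flips the sign with small probability.
A = rng.normal(size=(m, n))
noise = 0.1 * rng.normal(size=m)
flips = np.where(rng.random(m) < 0.05, -1.0, 1.0)
y = flips * np.sign(A @ x_true + noise)

# "Just least squares" decoder restricted to the range of G:
# minimize ||y - A W z||_2^2 over the latent z, then map back through G.
z_hat, *_ = np.linalg.lstsq(A @ W, y, rcond=None)
x_hat = W @ z_hat
x_hat /= np.linalg.norm(x_hat)

# Signs carry no magnitude information, so compare directions up to sign.
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(f"estimation error (up to sign): {err:.3f}")
```

Because the measurements carry only sign information, only the direction of the signal can be recovered, which is why both the truth and the estimate are normalized before comparison.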

Cited by 2 publications (2 citation statements, both published in 2022 and both classified as mentioning).
References 44 publications.

Citation statements, ordered by relevance:
“…Non-linear measurement models. While Theorem 1 concerns linear observation models, analogous guarantees have been provided for a variety of non-linear measurement models, including 1-bit observations [85], [105], [69], spiked matrix models [7], [28], phase retrieval [84], [53], principal component analysis [87], and general single-index models [88], [83], [86]. While these each come with their own challenges, the intuition behind their associated results is often similar to that discussed above for the linear model, with the m = O(k log(Lr/δ)) scaling typically remaining.…”
Section: E. Further Developments (mentioning)
confidence: 99%
“…Non-linear measurement models. While Theorem 1 concerns linear observation models, analogous guarantees have been provided for a variety of non-linear measurement models, including 1-bit observations [78], [98], [65], spiked matrix models [6], [25], phase retrieval [77], [49], principal component analysis [80], and general single-index models [81], [76], [79]. While these each come with their own challenges, the intuition behind their associated results is often similar to that discussed above for the linear model, with the m = O(k log(Lr/δ)) scaling typically remaining.…”
Section: E. Further Developments (mentioning)
confidence: 99%
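For reference, the scaling quoted in the citation statements above can be written out as follows. Reading L as the generator's Lipschitz constant, r as the radius of the latent ball, and δ as the target accuracy is the standard convention in this literature and is assumed here rather than stated in the snippet itself.

```latex
% Scaling quoted in the citation statements (assumed convention: L is the
% Lipschitz constant of the generator G, r the radius of the latent ball
% B_2^k(r), and \delta the target accuracy).
\[
  m \;=\; O\!\Bigl( k \log \tfrac{L r}{\delta} \Bigr)
\]
% i.e. the measurement count grows linearly in the latent dimension k and
% only logarithmically in L, r, and 1/\delta.
```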