2020
DOI: 10.1088/1751-8121/ab59ef

On the universality of noiseless linear estimation with respect to the measurement matrix

Abstract: In a noiseless linear estimation problem, one aims to reconstruct a vector x* from the knowledge of its linear projections y = Φx*. There have been many theoretical works concentrating on the case where the matrix Φ is a random i.i.d. one, but a body of heuristic evidence suggests that many of these results are universal and extend well beyond this restricted case. Here we revisit this problem through the prism of the development of message passing methods, and consider not only the universality of the ℓ1 …
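As a concrete illustration of the setup described in the abstract, the sketch below builds a small noiseless linear estimation instance y = Φx* with an i.i.d. Gaussian Φ and recovers a sparse x* by ℓ1 minimization (basis pursuit) cast as a linear program. The sizes, sparsity level, and solver choice are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of noiseless linear estimation: recover a sparse x* from
# y = Phi @ x* via l1 minimization (basis pursuit), written as an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10                 # signal size, measurements, sparsity (illustrative)

x_star = np.zeros(n)
x_star[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian measurement matrix
y = Phi @ x_star                                  # noiseless linear projections

# Basis pursuit:  min ||x||_1  s.t.  Phi x = y,  as an LP in (x, t):
#   min sum(t)  s.t.  x - t <= 0,  -x - t <= 0,  Phi x = y,  t >= 0
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Phi, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
x_hat = res.x[:n]
print("reconstruction error:", np.linalg.norm(x_hat - x_star))
```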

Cited by 10 publications (44 citation statements)
References 34 publications (71 reference statements)
“…Since their inception, AMP algorithms have found applications in diverse situations-on the one hand, they are directly used as computationally efficient inference algorithms in compressed sensing [25] and coding theory [62]; on the other hand, these algorithms have been used as constructive proof devices to characterize the asymptotic performance of statistical procedures such as the LASSO [5], M-estimators [7,22,35,34], maximum likelihood [67,66], and spectral methods [54,52] in high-dimensions. Given a data matrix M ∈ R^{N×N}, an AMP algorithm in its general form consists of the following iterative updates: z^{(t)} = M F_t(z^{(0)}, z^{(1)}, …”
Section: Introduction (mentioning)
confidence: 99%
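The excerpt above alludes to the general form of an AMP iteration. As a hedged illustration of the pieces it names (a matched-filter step, a separable denoiser F_t, and an Onsager correction on the residual), here is a minimal Donoho-Maleki-Montanari-style AMP for the compressed sensing setting; the soft-thresholding denoiser, threshold schedule, and iteration count are assumptions for illustration, not the citing paper's exact algorithm.

```python
# Hedged sketch of an AMP iteration for compressed sensing with a separable
# soft-thresholding denoiser and an Onsager correction term.
import numpy as np

def soft_threshold(u, tau):
    """Separable denoiser F_t: componentwise soft thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def amp(Phi, y, n_iter=30, alpha=1.5):
    m, n = Phi.shape
    delta = m / n
    x = np.zeros(n)
    z = y.copy()                       # residual
    for _ in range(n_iter):
        pseudo_data = x + Phi.T @ z    # matched-filter (effective) observation
        tau = alpha * np.std(z)        # simple threshold schedule (an assumption)
        x_new = soft_threshold(pseudo_data, tau)
        # Onsager correction: residual times the average derivative of the denoiser
        onsager = (z / delta) * np.mean(np.abs(x_new) > 0)
        z = y - Phi @ x_new + onsager
        x = x_new
    return x

# Example usage, assuming a measurement matrix Phi and observations y are in scope:
#   x_hat = amp(Phi, y)
```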
“…AMP algorithms are particularly attractive due to their theoretical tractability. Specifically, when the data matrix M is drawn from a rotationally-invariant ensemble (such as the Gaussian orthogonal ensemble), and if the function G_t (called the Onsager correction) is suitably chosen based on F_t, the joint empirical distributions of the iterates z^{(1)}, …”
Section: Introduction (mentioning)
confidence: 99%
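This excerpt refers to the state-evolution characterization of AMP iterates for rotationally-invariant (e.g. GOE) matrices. A hedged sketch of that characterization, in our own notation for a symmetric AMP with a separable denoiser F_t (not the citing paper's exact statement), is:

```latex
% Symmetric AMP with Onsager correction, for GOE-normalized M with E[M_{ij}^2] = 1/N
% (an assumption-laden paraphrase of the standard state-evolution result).
\begin{align}
  z^{(t+1)} &= M\,F_t\!\left(z^{(t)}\right) \;-\; b_t\,F_{t-1}\!\left(z^{(t-1)}\right),
  \qquad
  b_t = \frac{1}{N}\sum_{i=1}^{N} F_t'\!\left(z^{(t)}_i\right),\\[2pt]
  \Sigma_{s,t} &= \lim_{N\to\infty}\frac{1}{N}
  \left\langle F_{s-1}\!\left(z^{(s-1)}\right),\, F_{t-1}\!\left(z^{(t-1)}\right)\right\rangle .
\end{align}
```

Under this choice of Onsager term, the joint empirical distribution of the coordinates of (z^{(1)}, …, z^{(t)}) converges to a centered Gaussian N(0, Σ_t), which is the tractability the excerpt points to.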