2019
DOI: 10.1109/tit.2019.2934152

Analysis of Approximate Message Passing With Non-Separable Denoisers and Markov Random Field Priors

Abstract: Approximate message passing (AMP) is a class of low-complexity, scalable algorithms for solving high-dimensional linear regression tasks where one wishes to recover an unknown signal from noisy, linear measurements. AMP is an iterative algorithm that performs estimation by updating an estimate of the unknown signal at each iteration, and the performance of AMP (quantified, for example, by the mean squared error of its estimates) depends on the choice of a "denoiser" function that is used to produce these signal…
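As a point of reference, the following is a minimal sketch of a generic AMP iteration for the linear model y = A x + w, written in Python with NumPy. It is an illustration under assumptions, not the algorithm of this paper: it uses a separable soft-thresholding denoiser and an ad-hoc threshold rule (lam_scale times the residual RMS), whereas the paper's subject is AMP with non-separable denoisers; the function names, parameters, and dimensions below are hypothetical illustration choices.

import numpy as np

def soft_threshold(v, lam):
    # Entrywise (separable) soft-thresholding denoiser.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def amp(y, A, num_iters=30, lam_scale=1.0):
    # Generic AMP loop: denoise the pseudo-data, then form the
    # Onsager-corrected residual for the next iteration.
    n, N = A.shape
    x = np.zeros(N)                                   # current signal estimate
    z = y.copy()                                      # current corrected residual
    for _ in range(num_iters):
        tau = lam_scale * np.sqrt(np.mean(z ** 2))    # heuristic noise-level estimate
        r = x + A.T @ z                               # pseudo-data (effective observation)
        x_new = soft_threshold(r, tau)                # denoising step
        # Onsager correction: (1/n) * divergence of the denoiser at r;
        # for soft thresholding this is the number of surviving (nonzero) entries.
        onsager = (z / n) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Small synthetic usage example (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
n, N, k = 250, 500, 25
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, N))
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)
print("MSE:", np.mean((amp(y, A) - x_true) ** 2))

Swapping soft_threshold for a non-separable denoiser (for example, one exploiting a Markov random field prior) changes only the denoising step and the divergence used in the Onsager term, which is the kind of setting the paper analyzes.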

Cited by 17 publications (9 citation statements)
References 28 publications
“…The behavior of AMP in the high dimensional limit is tracked by the state evolution equations. The convergence of AMP parameters to the state evolution has been proved under various assumptions (see [20]–[26]). Furthermore AMP has been successful as a near optimal decoder for sparse superposition codes [25], [27]–[30].…”
Section: Approximate Message Passing (AMP)
mentioning (confidence: 99%)
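For orientation, a commonly quoted scalar form of the state evolution recursion (the standard version for separable denoisers, stated here as a generic reference rather than a formula taken from the cited works) is

\[
\tau_{t+1}^{2} \;=\; \sigma^{2} \;+\; \frac{1}{\delta}\,
\mathbb{E}\!\left[\bigl(\eta_t(X + \tau_t Z) - X\bigr)^{2}\right],
\qquad X \sim p_X,\ \ Z \sim \mathcal{N}(0,1)\ \text{independent of } X,
\]

where \delta = n/N is the sampling ratio, \sigma^{2} is the measurement-noise variance, \eta_t is the denoiser at iteration t, and the expectation tracks the per-iteration mean squared error of the AMP estimate. Non-separable denoisers, as in the paper above, require a suitably generalized (vector-valued) version of this recursion.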
“…Remark 1: Theorem 2 shows that $R_{\mathrm{AC}}$ of the CC scheme with $S_C$ can approach $C_G$ under two conditions: (a) the matching condition (22) holds and (b) both $R_C$ and $\delta \to 0$. These conditions require an underlying low-rate FEC code C that meets the matching condition (22). In practice, it is a highly complicated task to design such a low-rate code (see [15]–[17] for details).…”
Section: F. Approaching Capacity
mentioning (confidence: 99%)
“…where the second equality follows from (25), (50), and (55). Thus, we have arrived at (56) with $\bar{v}^{\mathrm{suf}}_{B\to A,t',t}$ given by (50), instead of (54).…”
mentioning (confidence: 99%)
“…Since we have defined the covariance messages that are consistent to the state evolution recursions in the large system limit, we have $V_{A\to B,\tau,\tau} \overset{\mathrm{a.s.}}{\to} \bar{V}_{A\to B,\tau,\tau}$ and $V_{B\to A,\tau,\tau} \overset{\mathrm{a.s.}}{\to} \bar{V}_{B\to A,\tau,\tau}$. Thus, (50) and (52) reduce to (54) and (58), respectively. This justifies the replacement of $v^{\mathrm{suf}}_{B\to A,t,t}$ in $W_t$ with $\bar{v}^{\mathrm{suf}}_{B\to A,t,t}$.…”
mentioning (confidence: 99%)