2013 IEEE International Symposium on Information Theory
DOI: 10.1109/isit.2013.6620309
Fixed points of generalized approximate message passing with arbitrary matrices

Abstract: The estimation of a random vector with independent components passed through a linear transform followed by a componentwise (possibly nonlinear) output map arises in a range of applications. Approximate message passing (AMP) methods, based on Gaussian approximations of loopy belief propagation, have recently attracted considerable attention for such problems. For large random transforms, these methods exhibit fast convergence and admit precise analytic characterizations with testable conditions for op…

Cited by 54 publications (68 citation statements) · References 46 publications (37 reference statements)
“…This is a result of the Berry-Esseen central limit theorem, which states that a sum of random variables converges to a Gaussian density; see a proof of this theorem in Donoho et al. (2011). Given that the sum-product equations involve products of random variables, rather than sums, derivations of GAMP based on this central limit theorem typically proceed by taking logarithms of equations (26)-(28). The marginal posterior p(β_i | y) is then recovered by performing an exponential transformation of the log messages, and by normalizing so that the posterior integrates to one; see the Online Appendix, Section A, for details.…”
mentioning
confidence: 99%
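
To make the log-domain recovery step in that excerpt concrete, here is a schematic rendering; the factor-to-variable message notation μ_{a→i} is an assumption for illustration and does not appear in the excerpt. The marginal posterior is obtained by exponentiating the summed log messages and normalizing:

```latex
p(\beta_i \mid y) \;\approx\;
\frac{\exp\!\Big( \log p(\beta_i) + \sum_{a} \log \mu_{a \to i}(\beta_i) \Big)}
     {\displaystyle\int \exp\!\Big( \log p(\beta_i') + \sum_{a} \log \mu_{a \to i}(\beta_i') \Big)\, d\beta_i'}
```

The denominator is the normalization constant ensuring the posterior integrates to one, as described in the excerpt.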
“…The statistical behavior of AMP can be characterized theoretically for large i.i.d. sub-Gaussian random feature matrices [15], and empirical results suggest that the theory holds more generally for certain types of non-random matrices [13]. One of the challenges with AMP, however, is that for arbitrary matrices convergence of the AMP iterations may require damping [16] or serial updates [17]. Recent work has shown that stable points of the AMP iterations correspond to stationary points of an approximation to the Bethe free energy [16] and has developed optimization methods which attempt to minimize the approximate Bethe free energy directly [18].…”
Section: B. Relation to Previous Work
mentioning
confidence: 99%
“…One of the challenges with AMP, however, is that for arbitrary matrices convergence of the AMP iterations may require damping [16] or serial updates [17]. Recent work has shown that stable points of the AMP iterations correspond to stationary points of an approximation to the Bethe free energy [16] and has developed optimization methods which attempt to minimize the approximate Bethe free energy directly [18]. While this leads to methods with guaranteed convergence, the statistical behavior of the solution is not fully understood for general matrices.…”
Section: B. Relation to Previous Work
mentioning
confidence: 99%
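
As an illustration of the damping these excerpts mention, below is a minimal sketch of a damped AMP iteration for the linear model y = Ax + w with a soft-thresholding denoiser. This is a generic sketch, not the algorithm of [16] or of the excerpted papers; the function names, threshold parameter, and damping factor beta are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, thresh):
    """Componentwise soft-thresholding denoiser (a common AMP choice)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def damped_amp(y, A, thresh=0.1, beta=0.5, n_iter=100):
    """Sketch of AMP with a damping factor beta in (0, 1].

    beta = 1 recovers the undamped update; smaller beta slows the
    change in the iterates, which can stabilize AMP on matrices
    that are not large i.i.d. random.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # Pseudo-data: current estimate plus back-projected residual.
        r = x + A.T @ z
        x_new = soft_threshold(r, thresh)
        # Onsager correction: previous residual scaled by the
        # denoiser's average derivative (sparsity fraction here).
        onsager = (z / m) * np.count_nonzero(x_new)
        z_new = y - A @ x_new + onsager
        # Damping: convex combination of old and new iterates.
        x = (1.0 - beta) * x + beta * x_new
        z = (1.0 - beta) * z + beta * z_new
    return x

# Illustrative usage on a random sparse-recovery instance:
rng = np.random.default_rng(0)
m, n, k = 100, 200, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:k] = rng.normal(size=k)
y = A @ x_true
x_hat = damped_amp(y, A, thresh=0.05, beta=0.7, n_iter=200)
```

Setting beta = 1 gives the standard undamped update; lowering beta trades convergence speed for stability, which is the trade-off the excerpts attribute to damping for arbitrary matrices.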
“…Following [RSR+13], we now consider a trial distribution which can be seen as providing independent approximations to the distributions of x and z = Fx…”
Section: Bethe Approximation
mentioning
confidence: 99%
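
Since the excerpt is truncated, here is a plausible form of such a factorized trial distribution, written in notation assumed here for illustration and consistent with the weak-consistency construction in [RSR+13]:

```latex
b(x, z) \;\approx\; b_x(x)\, b_z(z),
\qquad
\mathbb{E}_{b_z}[z] = F\,\mathbb{E}_{b_x}[x],
\quad
\operatorname{var}_{b_z}(z) = S\,\operatorname{var}_{b_x}(x),
```

where S is the componentwise square of F (entries S_{ij} = |F_{ij}|²). The exact z = Fx coupling is relaxed to these moment-matching ("weak consistency") constraints, which is what the next excerpt refers to.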
“…For a Gaussian likelihood P(y|z) ∝ exp(−(y−z)²/(2Δ)), the g function (4.57) becomes g(y, ω, V) = (y−ω)/(Δ+V). At this point, [RSR+13] employs two approximations: weak consistency constraints that give…”
Section: By Defining
mentioning
confidence: 99%
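
For completeness, here is a short derivation of that g function under the standard GAMP output-function convention g(y, ω, V) = (E[z | y] − ω)/V; this convention is an assumption, since the excerpt's equation (4.57) is not shown. With prior z ∼ N(ω, V) and likelihood y | z ∼ N(z, Δ), Gaussian conjugacy gives

```latex
\mathbb{E}[z \mid y]
  = \omega + \frac{V}{\Delta + V}\,(y - \omega)
\quad\Longrightarrow\quad
g(y, \omega, V)
  = \frac{\mathbb{E}[z \mid y] - \omega}{V}
  = \frac{y - \omega}{\Delta + V},
```

which matches the formula quoted in the excerpt.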