Bayesian Brain 2006
DOI: 10.7551/mitpress/9780262042383.003.0011
Neural Models of Bayesian Belief Propagation

Cited by 16 publications (16 citation statements)
Citation types: 16 mentioning, 0 supporting, 0 contrasting
Years published: 2008-2019
References 34 publications
“…Likewise, much work in computational neuroscience focuses on the implementational level, but is Bayesian in character (e.g., Pouget, Dayan, & Zemel, 2003; T. Lee & Mumford, 2003; Zemel, Huys, Natarajan, & Dayan, 2005; Ma, Beck, Latham, & Pouget, 2006; Doya, Ishii, Pouget, & Rao, 2007; Rao, 2007). We discuss the implications of this work in the next section.…”
Section: Optimality: What Does It Mean? (mentioning)
confidence: 99%
“…Spiking neurons can be modelled as Bayesian integrators accumulating evidence over time (Deneve, 2004; Zemel et al., 2005). Recurrent neural circuits are capable of performing both hierarchical and sequential Bayesian inference (Deneve, 2004; Rao, 2004, 2007). Even specific brain areas have been studied: for instance, there is evidence that the recurrent loops in the visual cortex integrate top-down priors and bottom-up data in such a way as to implement hierarchical Bayesian inference (T. Lee & Mumford, 2003).…”
Section: Biological Plausibility (mentioning)
confidence: 99%
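The integrator picture in this passage (a posterior carried forward in time and reweighted by each new observation) can be made concrete in a few lines. Below is a minimal Python sketch of such a sequential Bayesian filter on a toy two-state problem; the state space, transition matrix, and likelihood values are illustrative assumptions, not the circuit models of Deneve (2004) or Rao (2004, 2007).

```python
import numpy as np

# Minimal sketch of sequential Bayesian inference as a recurrent update.
# The two-state world and all numerical values below are toy assumptions
# for illustration; they are not taken from the cited models.

rng = np.random.default_rng(0)

T = np.array([[0.95, 0.05],     # T[i, j] = P(x_t = i | x_{t-1} = j)
              [0.05, 0.95]])
like = np.array([[0.6, 0.4],    # like[y, x] = P(y | x) for a binary observation y
                 [0.4, 0.6]])

post = np.full(2, 0.5)          # uniform prior over the hidden state
true_x = 1                      # ground truth, used only to generate observations

for t in range(50):
    y = int(rng.random() < like[1, true_x])  # sample a noisy observation
    pred = T @ post             # predict: push the posterior through the dynamics
    post = like[y] * pred       # update: reweight by the likelihood of y
    post /= post.sum()          # renormalize

print("P(x = 1 | y_1..y_50) =", round(float(post[1]), 3))
```

Each loop iteration is one tick of the recurrence: the same two operations, a linear prediction and a multiplicative likelihood update, repeat at every step, which is what makes this computation a natural fit for a recurrent circuit.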
“…According to current evidence-integration models (see Bogacz, Brown, Moehlis, Holmes, & Cohen, 2006), in any interval during the decision process, having made the series of observations y and a new observation y_new, the decision-maker updates a representation of the posterior probabilities p(x | y) by combining them with the likelihoods p(y_new | x): p(x | y, y_new) ∝ p(y_new | x) p(x | y). In so-called random walk or drift-diffusion models of two-alternative forced choice decision (Figure 4, top), accumulated evidence is represented in the form of a log posterior ratio, to which is added a log-likelihood ratio representing the evidence from each new observation (see Beck & Pouget, 2007; Bogacz et al., 2006; Gold & Shadlen, 2007; Rao, 2006; Ratcliff & McKoon, 2008). Given an unlimited number of observations, this procedure is guaranteed to converge to the correct hypothesis.…”
Section: Algorithmic Framework (mentioning)
confidence: 99%
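Written as a log posterior ratio, the multiplicative update above becomes purely additive, which is the random-walk/drift-diffusion picture. The sketch below, again a toy Python illustration, accumulates log-likelihood ratios until a fixed threshold is crossed (an SPRT-style stopping rule); the Gaussian observation model, its parameters, and the threshold are assumptions chosen for this example, not values from the cited papers.

```python
import numpy as np

# Toy drift-diffusion / SPRT-style accumulator: the decision variable z is a
# log posterior ratio, incremented by the log-likelihood ratio of each new
# observation. The Gaussian model and threshold are assumed for illustration.

rng = np.random.default_rng(1)

mu0, mu1, sigma = -0.5, 0.5, 1.0   # P(y | x=0), P(y | x=1): equal-variance Gaussians
theta = np.log(0.99 / 0.01)        # stop when the posterior odds reach 99:1

def log_lik_ratio(y):
    # log P(y | x=1) - log P(y | x=0) for the two Gaussians above
    return ((y - mu0) ** 2 - (y - mu1) ** 2) / (2 * sigma ** 2)

z = 0.0        # log posterior ratio; 0 encodes equal priors
true_x = 1     # ground truth, used only to generate observations
t = 0
while abs(z) < theta:
    y = rng.normal(mu1 if true_x else mu0, sigma)
    z += log_lik_ratio(y)          # add the evidence from the new observation
    t += 1

print(f"decided x = {int(z > 0)} after {t} observations (z = {z:.2f})")
```

With equal priors, z starts at 0; because each increment has positive expected value under the true hypothesis, z drifts toward the correct bound, which is the convergence guarantee the passage mentions.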
“…Several models have been proposed for neural implementation of Bayesian inference (see Rao, 2007 for a review). We focus here on one potential implementation.…”
Section: Model (mentioning)
confidence: 99%