2016
DOI: 10.1016/j.neucom.2015.10.045

Sampling-based causal inference in cue combination and its neural implementation

Abstract: Causal inference in cue combination is the problem of deciding whether the cues have a single cause or multiple causes. Although the Bayesian causal inference model successfully explains causal inference in cue combination, how it could be implemented by neural circuits is unclear. The existing method, based on calculating the log posterior ratio with variable elimination, is unrealistic and task-specific. In this paper, we take advantage of the special struc…

Cited by 11 publications (12 citation statements); references 35 publications.
“…In contrast, Cuppini, Ursino and collaborators created a neural network based on two parallel layers representing two unisensory modalities as well as a crossmodal layer receiving inputs from the uni-sensory layers 91,100,101 . Although not explicitly encoding probabilities, the connectivity within, and across layers, produced behavior very similar to the non-linear cue combination of causal inference, with cues being pulled together when close together, but unaffected when further apart.…”
Section: Implementational Level Of Analysis
Mentioning; confidence: 99%
“…This means the values for the hidden variables will be the same no matter what time they are observed. This model is important to many inference and decision making problems [23], [24], [25] since in many cases we have the prior knowledge where the state of the environment doesn't change or changes very slowly with respect to time [15].…”
Section: Inference Of Hidden Markov Models
Mentioning; confidence: 99%
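The static-state case described above (an HMM whose hidden variable does not change, i.e. an identity transition matrix) reduces filtering to recursive Bayes: posterior ∝ prior × product of observation likelihoods. A minimal sketch of that update in log space, with purely illustrative states and likelihood values (none of these numbers come from the paper):

```python
import numpy as np

# Recursive Bayesian inference for an HMM with a static hidden state:
# with an identity transition matrix, each observation simply adds its
# log-likelihood to the running log posterior, followed by renormalization.
# States and likelihood values below are hypothetical.

log_post = np.log(np.ones(3) / 3)  # uniform prior over 3 states, in log space

def update(log_post, log_lik):
    """One observation: add log-likelihood, renormalize via log-sum-exp."""
    log_post = log_post + log_lik
    return log_post - np.logaddexp.reduce(log_post)

# three observations, each favoring state 1
for _ in range(3):
    log_post = update(log_post, np.log(np.array([0.2, 0.6, 0.2])))

posterior = np.exp(log_post)
print(posterior.argmax())  # -> 1 (evidence accumulates on state 1)
```

Working in log space keeps the update numerically stable over many observations, which is also why membrane-potential (additive) encodings of log posteriors are a natural fit for this computation.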
“…When the new observation y_t of the HMM comes, an external current of I_k = (1/(ε_0 τ)) ln p(y_t | x_t = x_k) is added to the input current of neuron z_k in the WTA circuit at time T_t. The membrane potential of each spiking neuron in the WTA circuit encodes the logarithm of the posterior probability of the hidden variable being in each state (see equation (22)), and the firing rate of each neuron is proportional to the posterior probability of the hidden variable being in each state (see equation (23)). Moreover, the time course of neural firing rates can implement posterior inference of HMMs.…”
Section: Implement Inference With Spiking Neural Network
Mentioning; confidence: 99%
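The WTA readout quoted above can be sketched under simplified assumptions: each neuron's "membrane potential" stores the unnormalized log posterior of one hidden state, an observation adds an input current proportional to its log-likelihood, and firing rates are the softmax of the potentials, hence proportional to the posterior. The function names, r_max scale, and likelihood values are hypothetical, not the paper's implementation:

```python
import numpy as np

def wta_step(potentials, log_lik):
    """Observation-driven input: add the log-likelihood term to each potential."""
    return potentials + log_lik

def firing_rates(potentials, r_max=100.0):
    """Rates proportional to exp(potential): a softmax over the WTA neurons.
    r_max sets the overall rate scale (an assumed free parameter)."""
    p = np.exp(potentials - potentials.max())  # shift for numerical stability
    p /= p.sum()
    return r_max * p

potentials = np.zeros(3)                       # flat prior in log space
log_lik = np.log(np.array([0.1, 0.8, 0.1]))    # illustrative likelihoods
potentials = wta_step(potentials, log_lik)
rates = firing_rates(potentials)
print(rates)  # -> [10. 80. 10.]: neuron 1 fires most
```

Because the softmax normalization divides out the unknown constant, the rates track the normalized posterior even though each potential only accumulates unnormalized log evidence.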