2019
DOI: 10.1101/598789
Preprint
A divisive model of evidence accumulation explains uneven weighting of evidence over time

Abstract: Divisive normalization has long been used to account for computations in various neural processes and behaviours. The model proposes that inputs into a neural system are divisively normalized by the total activity of the system. More recently, dynamical versions of divisive normalization have been shown to account for how neural activity evolves over time in value-based decision making. Despite its ubiquity, divisive normalization has not been studied in decisions that r…
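As a rough illustration of the computation the abstract describes, the following minimal Python sketch divisively normalizes a stream of inputs by the running total activity of the system. The semi-saturation constant `sigma` and the cumulative-activity normalizer are illustrative assumptions, not the preprint's exact formulation.

import numpy as np

def divisively_normalized_stream(x, sigma=1.0):
    """Divisively normalize each input sample by the running total
    activity of the stream (illustrative form, not the paper's exact model)."""
    x = np.asarray(x, dtype=float)
    total = np.cumsum(np.abs(x))   # total activity of the system so far
    return x / (sigma + total)     # each input is scaled down as activity grows

# Identical samples receive progressively smaller weight as total activity
# grows, so evidence is weighted unevenly over time (a primacy-like effect).
samples = np.ones(8)
print(divisively_normalized_stream(samples).round(3))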



Cited by 4 publications (4 citation statements) | References: 50 publications
“…1f). Thus, context-dependent temporal dynamics of evidence accumulation differed markedly across individual rats, and even within individual rats for the two different types of evidence, a result consistent with single-context findings in humans 6,7. What underlies this variability across individuals?…”
Section: Individual Variability of Behavioral Kernels (supporting)
confidence: 78%
“…Representing this quantity in logarithmic form allows it to be implemented as a successive summation (Eqn. 8c), which can naturally be implemented by neurons (up to normalization constraints; see Keung et al., 2020). Bogacz et al. (2006) rearranged these terms to denote the logLR as integrated evidence (I_t) and show that the summation is a recursion which takes the form of a discrete random walk (with stochasticity inherent in the densities given by the evidences e_t):…”
Section: Mathematical Models of Sequential Inference: Gaussian and "Jump" Diffusion (mentioning)
confidence: 99%
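A minimal sketch of the recursion this statement describes, assuming Gaussian evidence densities for concreteness; the mean `mu`, trial length, and random seed are illustrative, not values from the cited papers.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two hypotheses: evidence e_t ~ N(+mu, 1) under H1, N(-mu, 1) under H0.
mu = 0.5
e = rng.normal(mu, 1.0, size=20)   # a trial where H1 is true

# logLR of each sample; accumulating in log space turns the product of
# likelihood ratios into a sum.
log_lr = norm.logpdf(e, mu, 1.0) - norm.logpdf(e, -mu, 1.0)

# The recursion I_t = I_{t-1} + logLR(e_t): a discrete random walk whose
# stochasticity comes from the evidence densities.
I = 0.0
for step in log_lr:
    I += step
print(I, np.cumsum(log_lr)[-1])    # identical by construction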
“…Gold and Shadlen (2001) proposed that neural circuits could implement evidence accumulation by computing this product in log space. Representing this quantity in logarithmic form allows it to be implemented as a successive summation (Equation 8c), which can naturally be implemented by neurons (up to normalization constraints; see Keung et al., 2020).…”
Section: Mathematical Models of Sequential Inference: Gaussian and "Jump" Diffusion (mentioning)
confidence: 99%
“…In the first model, based on approximate Bayesian inference, the primacy effect, produced by bottom-up vs. top-down hierarchical dynamics, was modulated by the stimulus properties, which could yield different PK time-courses, a prediction that was tested in a visual discrimination task 41. The second study proposed a model that can produce different PK time-courses by adjusting the time scales of a divisive normalization mechanism, which yields primacy, and a leak mechanism, which promotes recency 42. In addition, this model can also account for bump-shaped PKs, a class of PK that was found together with primacy, recency and flat PKs, in a study carried out using a large cohort of subjects (>100) 43.…”
Section: Discussion (mentioning)
confidence: 99%
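The "second study" cited here is the indexed preprint itself. As a hedged illustration of how a divisive normalization mechanism can favor primacy while a leak favors recency, the sketch below computes the effective weight each sample carries in a simple accumulator; the gain and decay forms (`sigma`, `leak`) are assumptions for illustration, not the model's actual equations.

import numpy as np

T = 20   # number of evidence samples per trial

def kernel(leak=0.0, sigma=np.inf):
    """Effective weight of each sample on the final accumulated value for a
    leaky accumulator whose inputs are divisively suppressed as activity
    builds up (illustrative form; `sigma` and the gain term are assumptions)."""
    t = np.arange(1, T + 1)
    gain = 1.0 / (1.0 + t / sigma)    # divisive suppression grows over time -> primacy
    decay = (1.0 - leak) ** (T - t)   # leak discounts early samples -> recency
    w = gain * decay
    return w / w.max()

print(kernel(sigma=5.0).round(2))            # normalization only: falling weights (primacy)
print(kernel(leak=0.2).round(2))             # leak only: rising weights (recency)
print(kernel(leak=0.1, sigma=5.0).round(2))  # mixing the two time scales reshapes the kernel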