2021
DOI: 10.7554/elife.71801

Gated recurrence enables simple and accurate sequence prediction in stochastic, changing, and structured environments

Abstract: From decision making to perception to language, predicting what is coming next is crucial. It is also challenging in stochastic, changing, and structured environments; yet the brain makes accurate predictions in many situations. What computational architecture could enable this feat? Bayesian inference makes optimal predictions but is prohibitively difficult to compute. Here, we show that a specific recurrent neural network architecture enables simple and accurate solutions in several environments. This archit…

Cited by 9 publications (20 citation statements)
References 144 publications (172 reference statements)
“…Although flat learning via the RW rule has proven very valuable to describe human (and animal) learning (Glimcher, 2011; Rescorla & Wagner, 1972; Steinberg et al., 2013), a wide range of previous work has argued that flat learning is insufficient to capture human learning in complex environments (Bai et al., 2014; Bouchacourt et al., 2022; Liu et al., 2022; McGuire et al., 2014; Verbeke & Verguts, 2019). Therefore, several hierarchical extensions to the flat learning approach have been proposed in several different environments and data sets (Bai et al., 2014; Behrens et al., 2007; Foucault & Meyniel, 2021; Kruschke, 2008; Mathys et al., 2011; Silvetti et al., 2011; Verbeke et al., 2021). Crucially, an extensive and systematic evaluation of these hierarchical extensions over multiple reinforcement learning environments was lacking.…”
Section: Discussion (mentioning)
confidence: 99%
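As context for the "flat" Rescorla-Wagner (RW) rule referenced in the excerpt above, here is a minimal sketch in Python. The function name, default values, and example data are illustrative assumptions, not code from any of the cited papers; the update itself is the standard delta rule.

```python
# Minimal sketch of the flat Rescorla-Wagner (delta) rule: the estimate
# moves toward each outcome by a fixed fraction (the learning rate).
def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
    """Return the trajectory of value estimates over a sequence of outcomes."""
    v = v0
    trajectory = []
    for outcome in outcomes:
        v = v + alpha * (outcome - v)  # delta rule with a fixed learning rate
        trajectory.append(v)
    return trajectory

# Example: estimates track a Bernoulli outcome stream, but only at one fixed
# speed, which is why flat learning struggles when the environment changes.
print(rescorla_wagner([1, 1, 0, 1, 0, 0, 0]))
```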
“…As a result, one could infer from context which rule set to switch to (Botvinick et al., 2009; Collins & Frank, 2013; Eckstein & Collins, 2020). Additionally, learning on the hierarchical level could allow adapting to changing levels of noise in reward feedback and hence learning when to switch (Foucault & Meyniel, 2021; Verbeke & Verguts, 2019; L. Q. Yu et al., 2021).…”
Section: Hierarchical Extensions To the Flat Model (mentioning)
confidence: 99%
“…Models of the first class presuppose that hidden outcome contingencies jump covertly from one state to another (Figure 2, bottom). An ideal observer model for such environments must consider the possibility that each observed data point reflects a changepoint (i.e., that each possible dish was produced by a new chef) and maintain a probability distribution over these discrete possibilities (Figure 1), making optimal inference computationally demanding and intractable for most practical applications (Adams & MacKay, 2007; Foucault & Meyniel, 2021; Wilson et al., 2010). The reduced Bayesian model (RBM) developed by Nassar et al. (2010) strongly reduces these demands by considering only the two possibilities that a changepoint did or did not occur on the most recent trial, thereby estimating the likelihood that the environment just changed.…”
Section: Environmental Changes Elevate Estimation Uncertainty (mentioning)
confidence: 99%
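To make the contrast in the excerpt concrete, here is a minimal sketch of a reduced changepoint learner in the spirit of Nassar et al. (2010): a delta rule whose learning rate is scaled on each trial by an estimate of changepoint probability. The hazard rate, noise level, outcome range, and function name are illustrative assumptions, and the residual learning-rate term is simplified; this is a sketch of the idea, not the published model.

```python
import math

def reduced_changepoint_model(outcomes, hazard=0.1, noise_sd=1.0, outcome_range=10.0):
    """Delta rule whose learning rate rises with the estimated probability
    that a changepoint occurred on the most recent trial."""
    mu = outcomes[0]  # current estimate of the hidden mean
    estimates = []
    for x in outcomes:
        error = x - mu
        # Likelihood of x if no changepoint: Gaussian around the current estimate.
        p_stay = math.exp(-0.5 * (error / noise_sd) ** 2) / (noise_sd * math.sqrt(2 * math.pi))
        # Likelihood of x if a changepoint occurred: flat over the outcome range.
        p_change = 1.0 / outcome_range
        # Posterior changepoint probability, for the most recent trial only.
        omega = (hazard * p_change) / (hazard * p_change + (1 - hazard) * p_stay)
        # Learning rate grows with changepoint probability (the full model also
        # adds a relative-uncertainty term; a small constant stands in for it here).
        alpha = omega + (1 - omega) * 0.1
        mu += alpha * error
        estimates.append(mu)
    return estimates

# A surprising outcome (the last value) drives omega toward 1, so the
# estimate jumps to the new regime within a trial.
print(reduced_changepoint_model([1.0, 1.2, 0.9, 1.1, 8.0]))
```

The point the quoted passage makes is visible here: instead of maintaining a full distribution over all possible changepoint times, as in Adams & MacKay (2007), the reduced model tracks a single binary hypothesis about the most recent trial.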
“…However, one issue of normative Bayesian models is that they are often computationally complex or not tractable at all (FeldmanHall & Nassar, 2021; Foucault & Meyniel, 2021; Mathys et al., 2011; Nassar et al., 2010). Therefore, applying such models to study learning and decision making under uncertainty often requires computationally more efficient model versions.…”
(mentioning)
confidence: 99%
“…A second level of adaptive learning corresponds to dynamically adjusting the learning rate from one observation to the next depending on what is observed; we refer to it as dynamic adaptive learning. Such dynamic adjustments are particularly critical for learning effectively in a dynamic and stochastic environment, so as to increase the learning rate locally when a single change point is detected (Foucault & Meyniel, 2021; Nassar et al., 2010). Compared to the first level (different average learning rates between blocks of trials), less is known in humans about this second level (dynamic adjustments of the learning rate at the trial level).…”
Section: Introduction (mentioning)
confidence: 99%
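The two levels described in this excerpt can be contrasted directly using the two sketches given earlier on this page (both functions and all settings are illustrative; this snippet assumes those definitions are in scope):

```python
# Outcome stream with a single change point at trial 4.
stream = [1.0, 1.1, 0.9, 1.0, 8.0, 7.9, 8.1]

# First level: one fixed (block-average) learning rate never speeds up at the jump.
flat = rescorla_wagner(stream, alpha=0.1, v0=1.0)

# Second level: the learning rate is adjusted trial by trial, so the estimate
# catches up within a trial or two of the change point.
dynamic = reduced_changepoint_model(stream)

for t, (f, d) in enumerate(zip(flat, dynamic)):
    print(f"trial {t}: flat={f:.2f}  dynamic={d:.2f}")
```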