2018
DOI: 10.1016/j.neuron.2018.05.013

Neurocomputational Dynamics of Sequence Learning

Abstract: The brain is often able to learn complex structures of the environment using a very limited amount of evidence, which is crucial for model-based planning and sequential prediction. However, little is known about the neurocomputational mechanisms of deterministic sequential prediction, as prior work has primarily focused on stochastic transition structures. Here we find that human subjects' beliefs about a sequence of states, captured by reaction times, are well explained by a Bayesian pattern-learning model th…
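As a rough illustration of the kind of account the abstract describes, the sketch below uses a simple Dirichlet-multinomial transition learner whose trial-by-trial surprise could be read out against reaction times. This is only a stand-in under assumed simplifications (first-order transitions, a symmetric Dirichlet prior, surprise as negative log predictive probability); it is not the paper's actual pattern-learning model, and the function and parameter names are invented for illustration.

```python
import numpy as np

def sequence_surprise(sequence, n_states, prior=1.0):
    """Dirichlet-multinomial learner over first-order transitions.

    Returns the surprise (-log posterior predictive probability) of each
    observed transition; under the usual assumption, larger surprise
    should go with slower reaction times.
    """
    counts = np.full((n_states, n_states), prior)  # Dirichlet pseudo-counts
    surprises = []
    for prev, nxt in zip(sequence[:-1], sequence[1:]):
        p_next = counts[prev, nxt] / counts[prev].sum()  # predictive probability
        surprises.append(-np.log(p_next))
        counts[prev, nxt] += 1  # update beliefs with the observed transition
    return np.array(surprises)

# A deterministic repeating pattern 0 -> 1 -> 2 -> 0 ...: surprise decays
# quickly as the structure is learned from a few observations.
print(sequence_surprise([0, 1, 2] * 10, n_states=3))
```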


Cited by 38 publications (32 citation statements)
References: 73 publications
“…These aspects of behaviour were well accounted for by the normative two-system model , thereby showing that subjects use a common probabilistic currency to arbitrate between the different hypotheses, in line with previous accounts of Bayesian inference applied to discrete states 22,31,68 . We also explored alternative possibilities in the form of the non-commensurable two-system model which computes pseudo posterior probabilities of the non-random hypotheses independently of each other and, hence, cannot normatively compare them.…”
Section: Discussion (supporting)
Confidence: 83%
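To make the contrast in this excerpt concrete, here is a minimal sketch, under assumed simplifications, of arbitration between hypotheses using a common probabilistic currency versus independently computed pseudo-posteriors. The hypothesis names, log-likelihood inputs, and sigmoid form of the pseudo-posterior are illustrative assumptions, not the cited model's actual equations.

```python
import numpy as np

def normative_posteriors(log_liks, log_priors):
    """One normalized posterior over all hypotheses (common currency),
    so the alternatives can be compared directly."""
    names = list(log_liks)
    log_post = np.array([log_liks[h] + log_priors[h] for h in names])
    log_post -= np.logaddexp.reduce(log_post)  # normalize across hypotheses
    return dict(zip(names, np.exp(log_post)))

def pseudo_posteriors(log_liks, log_lik_random):
    """Each non-random hypothesis is evaluated only against the random
    hypothesis (equal priors assumed), so the resulting numbers need not
    sum to one and cannot be compared normatively with one another."""
    return {h: 1.0 / (1.0 + np.exp(log_lik_random - ll))
            for h, ll in log_liks.items()}

# Illustrative log-likelihoods of the data under each hypothesis.
log_liks = {"deterministic_pattern": -3.0, "statistical_bias": -5.0}
print(normative_posteriors({**log_liks, "random": -9.0},
                           {h: np.log(1 / 3) for h in [*log_liks, "random"]}))
print(pseudo_posteriors(log_liks, log_lik_random=-9.0))
```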
“…For example, Bornstein and Daw (2012) employed a perceptual sequence learning task with an underlying probabilistic structure and found that the BG response to each stimulus was correlated with its forward entropy (the probability distribution of the possible next stimuli), whereas they suggested that hippocampal activity was related to “preparatory ‘prefetching’ of the anticipated next elements in the sequence” (specific predictions of sequence continuations; p. 1020). Using a similar sequence learning paradigm, Konovalov and Krajbich (2018) found that the caudate response was related to prediction error; that is, violations of expectancy (which they align with Bornstein and Daw's forward entropy), whereas they suggest that the hippocampal response was related to pattern encoding (but not retrieval, which seems to contradict Bornstein and Daw). Wang, Shen, Tino, Welchman, and Kourtzi (2017) used a probabilistic system and the participants’ task was to make explicit predictions about sequence continuations with no feedback.…”
Section: Discussion (mentioning)
Confidence: 90%
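As a concrete reading of the two quantities this excerpt distinguishes, the sketch below computes forward entropy (uncertainty over the next stimulus) and prediction error (surprise at the stimulus that actually occurred) from a table of learned transition counts. The function names and the simple count-based estimate are assumptions made for illustration, not the analyses used in the cited studies.

```python
import numpy as np

def forward_entropy(trans_counts, state):
    """Entropy (bits) of the predicted distribution over the next stimulus."""
    p = trans_counts[state] / trans_counts[state].sum()
    p = p[p > 0]  # ignore continuations that have never been observed
    return float(-(p * np.log2(p)).sum())

def prediction_error(trans_counts, state, next_state):
    """Surprise (bits) at the stimulus that actually followed."""
    p = trans_counts[state] / trans_counts[state].sum()
    return float(-np.log2(p[next_state]))

# After state 0, stimulus 1 has been seen 8 times and stimulus 2 twice, etc.
counts = np.array([[0.0, 8.0, 2.0],
                   [9.0, 0.0, 1.0],
                   [5.0, 5.0, 0.0]])
print(forward_entropy(counts, state=0))   # low entropy: next stimulus is fairly predictable
print(prediction_error(counts, 0, 2))     # rare continuation -> large surprise
```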
“…Neural network representations are often described as encoding latent information from a corpus of data (1)(2)(3)(4)(5)(6)(7)(8)(9). Similarly, the brain forms representations to help it overcome a formidable challenge: to organize episodes, tasks and behavior according to a priori unknown latent variables underlying the experienced sensory information.…”
Section: Introduction (mentioning)
Confidence: 99%