2020
DOI: 10.7554/elife.50469

Temporal chunking as a mechanism for unsupervised learning of task-sets

Abstract: Depending on environmental demands, humans can learn and exploit multiple concurrent sets of stimulus-response associations. Mechanisms underlying the learning of such task-sets remain unknown. Here we investigate the hypothesis that task-set learning relies on unsupervised chunking of stimulus-response associations that occur in temporal proximity. We examine behavioral and neural data from a task-set learning experiment using a network model. We first show that task-set learning can be achieved provided the …
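The chunking idea in the abstract can be illustrated with a minimal sketch. This is not the authors' network model: stimulus-response associations that occur in temporal proximity simply strengthen their mutual Hebbian weights, and the resulting clusters play the role of task-sets. The unit count, learning rate, trial structure, and chunking threshold are all illustrative assumptions.

```python
# Minimal sketch (not the paper's model): associations active close
# together in time get linked by Hebbian plasticity; the resulting
# clusters act as task-sets. All parameters are illustrative.
import numpy as np

n_assoc = 6                       # one unit per stimulus-response association
W = np.zeros((n_assoc, n_assoc))  # lateral Hebbian weights between associations
eta = 0.1                         # learning rate (assumed value)

# Two interleaved "task-sets": associations {0, 1, 2} tend to co-occur
# in time, as do {3, 4, 5}. The blocks stand in for trials.
rng = np.random.default_rng(0)
trials = [rng.permutation([0, 1, 2]) for _ in range(50)] + \
         [rng.permutation([3, 4, 5]) for _ in range(50)]
rng.shuffle(trials)

for block in trials:
    # Hebbian update: associations active within the same block
    # (i.e., in temporal proximity) strengthen their mutual weights.
    for i in block:
        for j in block:
            if i != j:
                W[i, j] += eta * (1.0 - W[i, j])  # saturating Hebbian rule

# Associations whose mutual weight exceeds a threshold form a chunk.
chunks = W > 0.5
print(chunks.astype(int))
```

Running the sketch prints a block-diagonal 0/1 matrix: the two groups of associations that co-occurred in time end up chunked together, with no supervision signal involved.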

Cited by 20 publications (11 citation statements) · References: 106 publications
“…Hierarchies exist beyond the somewhat simple learning of compositional sequences, and it is expected that hierarchical models share common basic features despite solving highly distinct problems. For instance, a recent example of a hierarchical model for working memory uses two different networks: an associative network and a task-set network [56]. In our setting, the associative network could be identified with the motifs (fast clock + read-out) whereas the task-set network would correspond to the syntax (slow clock + interneurons).…”
Section: Discussion · Citation type: mentioning
confidence: 99%
“…Another intriguing possibility is that unsupervised processes facilitate continual learning in biological systems by clustering neural representations according to their context. Hebbian learning might encourage the formation of orthogonal neural codes for different temporal contexts 147, which in turn allows tasks to be learned in different neural subspaces 146. The curious phenomenon of "representational drift" (where neural codes meander unpredictably over time) 148 might reflect the allocation of information to different neural circuits in distinct contexts, allowing for task knowledge to be partitioned in a way that minimises interference 149.…”
Section: Neural Resource Allocation During Task Learning · Citation type: mentioning
confidence: 99%
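The claim in the statement above, that Hebbian learning can yield orthogonal codes for distinct temporal contexts, can be illustrated with Sanger's generalized Hebbian algorithm, a Hebbian rule known to converge to mutually orthogonal directions. It is used here only as a stand-in: the cited reference 147 is not reproduced on this page, so this is not necessarily the mechanism it proposes, and the dimensions, learning rate, and two-context input statistics are assumptions.

```python
# Sketch: a Hebbian rule (Sanger's generalized Hebbian algorithm, used
# as a stand-in) drives two output units toward near-orthogonal codes
# when inputs alternate between two contexts. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, k = 20, 2                            # input dimension, output units
W = rng.normal(scale=0.1, size=(k, d))  # initial random weights
eta = 0.01

axis_a = np.eye(d)[0]                   # context A varies along this axis
axis_b = np.eye(d)[1]                   # context B along an orthogonal axis

for t in range(5000):
    # Alternate between the two temporal contexts; context B has smaller
    # variance so the principal directions are well separated.
    amp, axis = (1.0, axis_a) if t % 2 == 0 else (0.6, axis_b)
    x = amp * rng.normal() * axis + 0.05 * rng.normal(size=d)
    y = W @ x
    # Sanger's rule: a Hebbian term y x^T minus a lower-triangular term
    # that decorrelates each output unit from the ones before it.
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

w0 = W[0] / np.linalg.norm(W[0])
w1 = W[1] / np.linalg.norm(W[1])
print(abs(w0 @ w1))  # close to 0: the two learned codes are near-orthogonal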
“…However, brains are able to perform multiple tasks in the same environment. Those tasks often involve sequential behaviour at multiple timescales (Bouchacourt et al., 2019). Pursuing goals sometimes requires following a sequence of subroutines, with short-term/interim objectives, themselves divided into elemental skills (Botvinick et al., 2019).…”
Section: A Model-free Agent Using an Actor–Critic Architecture · Citation type: mentioning
confidence: 99%