2018
DOI: 10.1101/247460
Preprint

Focused learning promotes continual task performance in humans

Abstract: Humans can learn to perform multiple tasks in succession over the lifespan ("continual" learning), whereas current machine learning systems fail. Here, we investigated the cognitive mechanisms that permit successful continual learning in humans. Unlike neural networks, humans that were trained on temporally autocorrelated task objectives (focussed training) learned to perform new tasks more effectively, and performed better on a later test involving randomly interleaved tasks. Analysis of error patterns sugges…
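The blocked ("focused") versus randomly interleaved manipulation described in the abstract can be illustrated with a minimal sketch, assuming two toy regression tasks and a single linear learner trained by SGD; none of this is the authors' paradigm or code. It only shows why a standard gradient-trained network tends to overwrite the first task when training is blocked, whereas an interleaved schedule spreads its error across both tasks.

# Illustrative sketch only (not the authors' code or tasks): contrast a
# blocked ("focused") schedule with an interleaved schedule for two toy
# regression tasks, to show why a plain gradient-trained network forgets.
import numpy as np

rng = np.random.default_rng(0)

# Two assumed toy "tasks": the same 2-D input must be mapped to different targets.
TASKS = {
    "A": np.array([1.0, 0.0]),  # task A: report the first input dimension
    "B": np.array([0.0, 1.0]),  # task B: report the second input dimension
}

def make_schedule(n_per_task, interleaved):
    # Blocked: A A A ... B B B.  Interleaved: the same trials, shuffled.
    labels = ["A"] * n_per_task + ["B"] * n_per_task
    if interleaved:
        rng.shuffle(labels)
    return labels

def train(schedule, lr=0.1):
    # One shared linear readout trained by SGD on squared error, trial by trial.
    w = np.zeros(2)
    for label in schedule:
        x = rng.normal(size=2)
        target = TASKS[label] @ x
        w -= lr * (w @ x - target) * x
    return w

def interleaved_test_mse(w, n_trials=2000):
    # Test on randomly interleaved trials from both tasks, mirroring the
    # interleaved test phase mentioned in the abstract.
    errs = []
    for _ in range(n_trials):
        label = "A" if rng.random() < 0.5 else "B"
        x = rng.normal(size=2)
        errs.append((w @ x - TASKS[label] @ x) ** 2)
    return float(np.mean(errs))

for interleaved in (False, True):
    w = train(make_schedule(500, interleaved))
    name = "interleaved" if interleaved else "blocked (focused)"
    print(f"{name:18s} weights {np.round(w, 2)}  test MSE {interleaved_test_mse(w):.2f}")

Running the sketch prints a higher interleaved-test error for the blocked schedule (the weights end up tuned to the last-trained task), a simple analogue of the catastrophic forgetting that the abstract contrasts with human performance under focused training.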


Cited by 2 publications (2 citation statements); references 23 publications.
“…Along similar lines, in the realm of perceptual learning, DNNs can be tested for the same qualitative properties as observed in humans, such as increased specificity (Wenliang & Seitz, 2018). Other efforts that focused on learning dynamics have moved beyond conventional DNNs to unsupervised training (Anselmi et al., 2016; Eslami et al., 2018; Flesch, Balaguer, Dekker, Nili, & Summerfield, 2018; Lotter et al., 2017; Stoianov & Zorzi, 2012; Watanabe et al., 2018) or to architectures with strong prior knowledge (Lake et al., 2015).…”
Section: The Cognitive Scientist's Toolbox (mentioning)
confidence: 99%
“…However, it can be argued that the move toward deep learning has the potential of bringing NLP back to its roots after all. Some recent activities and findings in this direction include: techniques like multi-task learning have been used to integrate cognitive data as supervision in NLP tasks (Barrett et al., 2016); pre-training/finetuning regimens are potentially interpretable in terms of cognitive mechanisms like general competencies applied to specific tasks (Flesch et al., 2018); the ability of modern models for 'few-shot' or even 'zero-shot' performance on novel tasks mirrors human performance (Srivastava et al., 2018); evidence of unsupervised structure learning in current neural network architectures mirrors classical linguistic structures (Hewitt and Manning, 2019; Tenney et al., 2019).…”
(mentioning)
confidence: 99%