Preprint, 2021. DOI: 10.1101/2021.07.29.454337

Building integrated representations through interleaved learning

Abstract: Neural representations can be characterized as falling along a continuum, from distributed representations, in which neurons are responsive to many related features of the environment, to localist representations, where neurons orthogonalize activity patterns despite any input similarity. Distributed representations support powerful learning in neural network models and have been posited to exist throughout the brain, but it is unclear under what conditions humans acquire these representations and what computa…



Cited by 15 publications (35 citation statements: 2 supporting, 33 mentioning, 0 contrasting). References 81 publications.

“…Our simulations demonstrate how the hippocampus can autonomously teach the neocortex new information during NREM sleep, and how alternating between NREM and REM sleep over the course of the night can promote graceful integration of new information into existing cortical knowledge. The hippocampus is implemented as our C-HORSE model, which is able to quickly learn new categories and statistics in the environment, in addition to individual episodes (29, 30, 47). We extend the model with a neocortical area that serves as the target of consolidation and with a sleep environment that allows learning and dynamics between these areas to unfold autonomously offline.…”
Section: Discussion. Citation type: mentioning. Confidence: 99%.
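The alternation this passage describes can be caricatured with a toy replay loop, loosely mapping NREM to replay of hippocampally stored new items and REM to reactivation of prior cortical knowledge. Below is a minimal sketch assuming a linear "neocortex" trained by a delta rule; replay_night, train_step, and all of the data are illustrative stand-ins, not the C-HORSE implementation:

# Toy offline consolidation loop: a fast store replays new items into a slow
# linear "neocortex", interleaved with reactivation of prior knowledge.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 10
W = rng.normal(0, 0.1, (n_out, n_in))            # slow "neocortical" weights

old_x = rng.normal(size=(50, n_in))              # previously consolidated items
old_y = np.tanh(old_x @ rng.normal(size=(n_in, n_out)))
new_x = rng.normal(size=(5, n_in))               # hippocampally stored episodes
new_y = np.tanh(new_x @ rng.normal(size=(n_in, n_out)))

def train_step(W, x, y, lr=0.01):
    err = x @ W.T - y                            # prediction error on this batch
    return W - lr * err.T @ x / len(x)           # small delta-rule update

def replay_night(W, n_cycles=200):
    """Alternate replay of new items with reactivation of old knowledge."""
    for _ in range(n_cycles):
        W = train_step(W, new_x, new_y)          # NREM-like: replay new episodes
        i = rng.choice(len(old_x), size=5)
        W = train_step(W, old_x[i], old_y[i])    # REM-like: reactivate old items
    return W

W = replay_night(W)

The design point is the interleaving itself: each pass over the new items is alternated with reactivation of prior knowledge, which is what lets new information settle into the slow network without overwriting what is already there.
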
“…The hippocampus module in our framework is a version of our C-HORSE model (29, 30, 47), which rapidly learns both new statistical and episodic information, in its MSP and TSP, respectively. The original CLS framework proposed that the hippocampus handles the rapid synaptic changes involved in encoding new episodes, in order to prevent the interference that would occur from attempting to make these synaptic changes directly in neocortex.…”
Section: Discussion. Citation type: mentioning. Confidence: 99%.
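The rationale restated here, that making rapid synaptic changes directly in neocortex causes interference, can be demonstrated with a toy sequential-learning run on a single linear associator. This is a sketch of the CLS argument, not the paper's model; fit, recall, and the pattern counts are arbitrary choices:

# Forcing new associations directly into one slow network overwrites old ones.
import numpy as np

rng = np.random.default_rng(1)
n = 30

def fit(W, x, y, lr=0.005, steps=1000):
    for _ in range(steps):
        W = W - lr * (x @ W.T - y).T @ x         # batch delta rule
    return W

def recall(W, x, y):
    return np.mean(np.sign(x @ W.T) == y)        # sign-match accuracy

old_x = rng.choice([-1.0, 1.0], size=(20, n))    # consolidated associations
old_y = rng.choice([-1.0, 1.0], size=(20, n))
new_x = rng.choice([-1.0, 1.0], size=(20, n))    # a batch of new episodes
new_y = rng.choice([-1.0, 1.0], size=(20, n))

W = fit(np.zeros((n, n)), old_x, old_y)
print("old recall after consolidation:", recall(W, old_x, old_y))

W = fit(W, new_x, new_y)                         # rapid encoding of new batch
print("new recall:", recall(W, new_x, new_y))
print("old recall after new encoding:", recall(W, old_x, old_y))  # typically drops

Because the 40 total associations exceed the 30 input dimensions, fitting the new batch exactly forces error back onto the old one, so old recall degrades; routing the rapid changes through a separate fast store is exactly the hippocampal role the CLS framework proposes.
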
“…Nonetheless, many solutions that do yield near-perfect performance on both premise and inferred discriminations do exist (see https://osf.io/ps3ch/ for a scripted example). With more hidden units the networks were capable of learning increasingly sparse representations of each discrimination, see [42,43]. For example, with 6 hidden units, a specific hidden unit could uniquely activate the target output for each trained discrimination.…”
Section: Results. Citation type: mentioning. Confidence: 99%.
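For concreteness, here is a hand-built localist solution of the kind described, assuming 6 premise pairs over 7 one-hot-coded items with the higher-ranked item always presented on the left; the encoding and weights are hypothetical and not taken from the linked script:

# One hidden unit per trained discrimination: each unit detects exactly one
# premise pair and drives the correct "left item wins" response.
import numpy as np

items = "ABCDEFG"
premises = [(i, i + 1) for i in range(6)]        # pairs (A,B), (B,C), ... (F,G)

def encode(a, b):
    x = np.zeros(14)                             # two 7-unit one-hot banks
    x[a], x[7 + b] = 1.0, 1.0                    # left item, right item
    return x

W_hid = np.zeros((6, 14))
for k, (a, b) in enumerate(premises):
    W_hid[k, a] = 1.0                            # unit k watches left item a
    W_hid[k, 7 + b] = 1.0                        # ... and right item b
b_hid = -1.5 * np.ones(6)                        # fires only when both present

W_out = np.ones((1, 6))                          # any active unit -> choose left

def forward(x):
    h = ((W_hid @ x + b_hid) > 0).astype(float)  # localist hidden layer
    return (W_out @ h).item() > 0                # True = choose left item

for a, b in premises:
    assert forward(encode(a, b))                 # perfect premise performance

Each hidden unit fires for exactly one trained pair, the one-unit-per-discrimination solution the passage describes; note that this construction only covers the trained pairs, the point being that with enough hidden units such sparse solutions to the premise discriminations become available.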