2018
DOI: 10.1101/471987
Preprint

Predictive learning as a network mechanism for extracting low-dimensional latent space representations

Abstract: Neural networks have achieved many recent successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task's low-dimensional latent structure in the network activity, i.e., in the learned neural representations. Similarly, biological neural circuits, and in particular the hippocampus, may produce representations that organize semantically related episodes. Here, we investigate the hypothesis that representations with low-dimensional latent structure, ref…

Cited by 19 publications (32 citation statements)
References 58 publications (62 reference statements)
“…We first consider a network that uses inhibitory feedback to cancel the predictable aspects of its input. This is in line with models of predictive coding (58–60). We then consider a linear-nonlinear mapping that provides a prediction of y ( θ ) from a partially corrupted readout, using this signal to retrain readout weights.…”
Section: Results (supporting)
confidence: 88%
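The cancellation mechanism described in this statement can be illustrated with a toy script. The sketch below is not the cited model: it assumes a circular latent variable θ, a fixed linear mapping from θ to the input, and a simple delta rule on the feedback weights, all of which are illustrative choices. It only shows the core idea that feedback trained on the prediction error removes the predictable part of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: the predictable part is a linear function of a circular latent
# variable theta; the rest is unpredictable noise. (Illustrative assumption.)
T, n_in = 2000, 20
phase = 2 * np.pi * np.arange(T) / 100.0
theta = np.stack([np.sin(phase), np.cos(phase)], axis=1)       # (T, 2) latent
A = rng.standard_normal((2, n_in)) / np.sqrt(2)
x = theta @ A + 0.1 * rng.standard_normal((T, n_in))           # (T, n_in) input

# Inhibitory feedback: a prediction x_hat is subtracted from the input, and the
# feedback weights W are trained on the residual (the prediction error).
W = np.zeros((2, n_in))
eta = 0.05
residual_power = []
for t in range(T):
    x_hat = theta[t] @ W                      # predicted (cancellable) input
    residual = x[t] - x_hat                   # what survives the inhibition
    W += eta * np.outer(theta[t], residual)   # delta rule shrinks the residual
    residual_power.append(np.mean(residual ** 2))

print(f"mean residual power, first 200 steps: {np.mean(residual_power[:200]):.3f}")
print(f"mean residual power, last 200 steps:  {np.mean(residual_power[-200:]):.3f}")
```

The residual power drops to the noise floor once the feedback has learned to cancel the θ-dependent component; the second mechanism in the statement (retraining readout weights from a predicted y(θ)) follows the same error-driven logic.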
“…In other words, while sequences encode short-term predictions, maps encode long-term, average predictions of future locations. Our view is then compatible with the idea that the hippocampus learns a predictive representation 75 but proposes that it is hierarchical.…”
Section: Discussion (supporting)
confidence: 54%
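The contrast drawn here between short-term sequence predictions and long-term, averaged predictions can be made concrete with the successor representation, one common formalization of a predictive map. The sketch below is an illustrative aside, not the cited paper's hierarchical model; the random-walk track, discount factor, and state count are arbitrary choices.

```python
import numpy as np

# Successor representation M = sum_t gamma^t T^t = (I - gamma*T)^(-1):
# each row gives the discounted expected future occupancy of every location,
# i.e. a long-horizon, averaged prediction of where the agent will be.
n_states, gamma = 8, 0.9

# Illustrative dynamics: an unbiased random walk on a short linear track.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, max(s - 1, 0)] += 0.5                  # step left (reflecting boundary)
    T[s, min(s + 1, n_states - 1)] += 0.5       # step right (reflecting boundary)

one_step = T                                      # short-term, sequence-like prediction
M = np.linalg.inv(np.eye(n_states) - gamma * T)   # long-term, averaged prediction (map)

print("one-step prediction from location 0:", np.round(one_step[0], 2))
print("SR row for location 0:              ", np.round(M[0], 2))
```

The one-step row is sharply peaked on the neighboring locations, while the successor-representation row spreads mass over the whole track, mirroring the sequence-versus-map distinction in the quoted statement.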
“…Second, using different dimensions for different reach conditions can allow learning of multiple behaviors without catastrophic interference 47 , which is otherwise a major challenge for neural networks. Third, the use of distant portions of a curved manifold can reduce interference between conditions 48 , while reaping the benefit of strong generalization due to local linearity 49,50 . Finally, by allowing rotations to pivot in higher-dimensional space, the system can control each output dimension with much greater flexibility than if limited to planar rotational dynamics.…”
Section: Discussion (mentioning)
confidence: 99%
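The interference argument in this statement (separate dimensions for separate conditions) can be checked numerically. The following sketch is an assumption-laden toy, not an analysis from the cited work: it builds two orthogonal 2-D subspaces in a 50-neuron state space, fits a readout on one condition, and shows that activity from the other condition does not drive it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_time = 50, 200

# Two reach conditions whose activity lives in orthogonal 2-D subspaces of the
# 50-dimensional neural state space. (Illustrative construction.)
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, 4)))
basis_A, basis_B = Q[:, :2], Q[:, 2:]             # orthonormal, mutually orthogonal
latent_A = rng.standard_normal((n_time, 2))
latent_B = rng.standard_normal((n_time, 2))
X_A = latent_A @ basis_A.T                        # condition-A population activity
X_B = latent_B @ basis_B.T                        # condition-B population activity

# Fit a linear readout for condition A only.
y_A = latent_A[:, 0]
w, *_ = np.linalg.lstsq(X_A, y_A, rcond=None)

# Condition-B activity barely drives that readout, because it occupies
# dimensions orthogonal to the ones the readout was fit on: no interference.
print("readout output std under condition A:", round(float(np.std(X_A @ w)), 3))
print("readout output std under condition B:", round(float(np.std(X_B @ w)), 3))
```

The second print is essentially zero: because the least-squares readout lies in condition A's subspace, condition B can be learned or varied freely without disturbing it, which is the point about avoiding catastrophic interference.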