2021
DOI: 10.1038/s41467-021-21696-1

Predictive learning as a network mechanism for extracting low-dimensional latent space representations

Abstract: Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observa…
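The abstract's central claim, that learning to predict observations exposes the task's low-dimensional latent structure, can be illustrated with a minimal numeric sketch. Everything below (dimensions, dynamics, variable names) is an illustrative placeholder, not the paper's simulated navigation task: a 2-D latent state drives 50-D observations, a linear next-step predictor is fit, and the learned predictor turns out to be low-rank, exposing the latent dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world with low-dimensional latent structure (illustrative settings):
# a 2-D latent state drives 50-D noisy observations.
d_lat, d_obs, T = 2, 50, 5000
A = 0.95 * np.eye(d_lat)                        # slow latent dynamics
C = rng.standard_normal((d_obs, d_lat))         # latent-to-observation map
z = np.zeros((T, d_lat))
for t in range(1, T):
    z[t] = A @ z[t - 1] + 0.1 * rng.standard_normal(d_lat)
x = z @ C.T + 0.05 * rng.standard_normal((T, d_obs))

# Predictive learning reduced to its simplest form: fit a linear map W
# so that x[t+1] ≈ W @ x[t].
X_now, X_next = x[:-1], x[1:]
W = np.linalg.lstsq(X_now, X_next, rcond=None)[0].T

# The learned predictor is effectively low-rank: its singular-value
# spectrum exposes the 2-D latent space despite 50-D observations.
s = np.linalg.svd(W, compute_uv=False)
print("leading singular values:", np.round(s[:5], 3))
```

Only two singular values are appreciably nonzero, matching the latent dimensionality rather than the observation dimensionality.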

Cited by 58 publications (49 citation statements)
References 70 publications
“…Both of these models are abstract, as we are not primarily concerned with the architecture of the predictive model, only its overall behavior. We first consider a network that uses inhibitory feedback to cancel the predictable aspects of its input, in line with models of predictive coding (67–69). We then consider a linear–nonlinear mapping that provides a prediction of from a partially corrupted readout.…”
Section: Results (mentioning)
confidence: 99%
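The inhibitory-feedback idea quoted above can be sketched concretely. This assumes nothing about the cited models beyond the quoted behavior; the LMS predictor, step size, and toy stimulus are illustrative choices. A running prediction of the input is subtracted (the inhibition), so only the unpredictable residual is transmitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy input: a predictable sinusoid plus unpredictable noise.
T, period = 5000, 50
x = np.sin(2 * np.pi * np.arange(T) / period) + 0.1 * rng.standard_normal(T)

# Inhibitory feedback in the predictive-coding sense: a linear predictor
# estimates the next sample from recent history, the estimate is subtracted
# (inhibition), and only the residual error is transmitted and used to adapt.
k, eta = 10, 0.01                  # history length and LMS step (illustrative)
w = np.zeros(k)
residual = np.zeros(T)
for t in range(k, T):
    hist = x[t - k:t]
    err = x[t] - w @ hist          # what the feedback failed to cancel
    residual[t] = err
    w += eta * err * hist          # adapt until predictable structure is cancelled

# Once adapted, mostly the unpredictable noise remains in the output.
print(f"input variance {x.var():.3f}, residual variance {residual[k:].var():.3f}")
```

The transmitted signal's variance collapses to roughly the noise floor, which is the sense in which the predictable aspects of the input are cancelled.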
“…Some theories propose that neural populations retain a latent state that is used to predict future inputs (67–69). This prediction is compared to incoming information to generate a prediction error, which is fed back through recurrent interactions to update the latent state.…”
Section: Results (mentioning)
confidence: 99%
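The quoted loop (predict the input from the latent state, compare with the incoming observation, feed the error back) takes only a few lines to sketch. The readout matrix, step size, and dimensions below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes and a random generative readout (placeholders only).
n_latent, n_obs = 3, 20
C = rng.standard_normal((n_obs, n_latent)) / np.sqrt(n_latent)

def update_latent(z, x, step=0.05):
    """One pass of the quoted loop: predict the input from the latent
    state, compare with the incoming observation, and feed the prediction
    error back to update the latent state."""
    x_hat = C @ z                  # prediction of the input
    err = x - x_hat                # prediction error
    return z + step * C.T @ err    # error fed back updates the latent state

# The latent state settles on the value that explains the observation.
z_true = rng.standard_normal(n_latent)
x_obs = C @ z_true
z = np.zeros(n_latent)
for _ in range(500):
    z = update_latent(z, x_obs)
print("recovered latent:", np.allclose(z, z_true, atol=1e-3))
```

Iterating the error-driven update performs gradient descent on the squared prediction error, so the latent state converges to the least-squares estimate of the true latent.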
“…Soft modes: Trained systems often show 'soft' modes even if training did not explicitly seek such softness (16, 17, 145, 146). Soft modes correspond to normal modes with low energy eigenvalue in equilibrium systems, or more generally, a small Lyapunov exponent.…”
Section: Network Dynamics (mentioning)
confidence: 99%
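As a worked illustration of "normal modes with low energy eigenvalue", here is a toy quadratic energy with one deliberately soft direction; the matrix and its spectrum are invented for illustration and are not from the cited review:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy quadratic energy E(u) = 0.5 * u^T H u with one soft direction.
n = 6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))        # orthonormal modes
stiffness = np.array([0.01, 1.0, 2.0, 3.0, 4.0, 5.0])   # first mode is soft
H = Q @ np.diag(stiffness) @ Q.T                         # Hessian of the energy

# Normal-mode analysis: eigenvectors of H are the modes and eigenvalues
# their stiffness; a soft mode is one whose eigenvalue is near zero.
evals, evecs = np.linalg.eigh(H)
soft_mode = evecs[:, 0]                                  # lowest-eigenvalue mode
print("mode stiffnesses:", np.round(evals, 3))
# A unit displacement along the soft mode barely raises the energy:
print("energy along soft mode:", 0.5 * soft_mode @ H @ soft_mode)   # ≈ 0.005
```

Displacing the system along the soft mode costs almost no energy, which is what makes such directions easy for learning or dynamics to exploit.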
“…However these methods are usually based on probabilistic models, incorporate latent dynamics, and allow for more nonlinear relationships between the latent factors and activity. See: Paninski and Cunningham [2018], Linderman and Gershman [2017], Whiteway and Butts [2019], Hurwitz et al [2021], Recanatesi et al [2021]. Representational Similarity Analyses. While these methods have been popular as a means of comparing ANNs to neural activity, they can also be applied to look for theoretically-motivated encoding schemes or compare across different neural populations within a single system, thereby providing insights into how information is transformed.…”
Section: The Toolbox (mentioning)
confidence: 99%
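A minimal representational similarity analysis sketch, with synthetic data standing in for two neural populations responding to the same stimuli (for example, a network layer and a recorded brain area); the sizes and noise level are arbitrary. Each system is summarized by a representational dissimilarity matrix (RDM), and the RDMs are compared directly:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Synthetic placeholders for two populations observing the same stimuli.
n_stim = 40
pop_a = rng.standard_normal((n_stim, 100))               # stimuli x neurons
mix = rng.standard_normal((100, 80)) / 10.0
pop_b = pop_a @ mix + 0.5 * rng.standard_normal((n_stim, 80))

# Representational dissimilarity matrix (RDM): pairwise distances between
# responses to every pair of stimuli (upper triangle, flattened).
rdm_a = pdist(pop_a, metric="correlation")
rdm_b = pdist(pop_b, metric="correlation")

# Comparing systems by correlating their RDMs abstracts away neuron
# identity, so heterogeneous populations can be compared directly.
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"representational similarity (Spearman rho): {rho:.2f}")
```

Because the comparison happens at the level of stimulus-by-stimulus distance structure, the two populations need not share neurons, dimensionality, or even recording modality.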