2020
DOI: 10.1101/2020.01.16.908889
Preprint

The hippocampal formation as a hierarchical generative model supporting generative replay and continual learning

Abstract: We advance a novel computational theory of the hippocampal formation as a hierarchical generative model that organizes sequential experiences, such as rodent trajectories during spatial navigation, into coherent spatiotemporal contexts. We propose that to make this possible, the hippocampal generative model is endowed with strong inductive biases to pattern-separate individual items of experience (at the first hierarchical layer), organize them into sequences (at the second layer) and then cluster them into maps…
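The abstract describes a three-level generative hierarchy: pattern-separated item codes at the bottom, sequences of items in the middle, and maps that cluster sequences at the top. A minimal sketch of ancestral sampling from such a hierarchy is given below; the dimensions, the sparse binary item codes, and the Dirichlet map-over-sequence distributions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper).
N_MAPS, N_SEQUENCES, N_ITEMS, CODE_DIM = 3, 5, 20, 50

# Layer 1: pattern-separated item codes (sparse random binary vectors).
item_codes = (rng.random((N_ITEMS, CODE_DIM)) < 0.1).astype(float)

# Layer 2: sequences impose an order over subsets of items.
sequences = [rng.permutation(N_ITEMS)[:6] for _ in range(N_SEQUENCES)]

# Layer 3: each map is a distribution over sequences (clusters them).
map_over_sequences = rng.dirichlet(np.ones(N_SEQUENCES), size=N_MAPS)

def generate_trajectory(map_id):
    """Ancestral sampling: map -> sequence -> ordered item codes.
    This top-down pass is what generative replay would reuse offline."""
    seq_id = rng.choice(N_SEQUENCES, p=map_over_sequences[map_id])
    return [item_codes[i] for i in sequences[seq_id]]

replayed_episode = generate_trajectory(map_id=0)
```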


Cited by 19 publications (31 citation statements). References 136 publications.
Citation types: 0 supporting, 31 mentioning, 0 contrasting.
“…they are never directly experienced. An insight from recent advances in machine learning is that, in order to generalize better to the environment, it is beneficial to explore these unvisited subspaces (in machine learning terms, this is known as "generative replay"; Shin et al., 2017; Stoianov et al., 2020; van de Ven et al., 2020). It is possible that dreams reflect this exploration, where a point in this internal model is activated and, via feedback connections to cortex, drives the AIZ of L5p cells (Figure 4, purple lines).…”
Section: Apical Drive and Dream Characteristics (mentioning)
confidence: 99%
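The generative replay cited in this statement (Shin et al., 2017) rehearses past experience by sampling it from a learned generator instead of a stored buffer, so new learning does not overwrite old knowledge. A toy sketch of that training-data construction follows; the Gaussian stand-in for the generator and all sizes are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Old-task data is seen once, used to fit a simple "generator"
# (here a Gaussian stand-in for a trained generative model), then discarded.
old_task = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))
gen_mean, gen_cov = old_task.mean(axis=0), np.cov(old_task.T)

new_task = rng.normal(loc=+2.0, scale=1.0, size=(500, 2))

# Generative replay: mix fictive samples from the generator with new data,
# so training on the combined batch rehearses the old distribution too.
replayed = rng.multivariate_normal(gen_mean, gen_cov, size=500)
training_batch = np.vstack([new_task, replayed])
```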
“…This error-correction mechanism eventually converges to the correct hypothesis (e.g., "I am in environment 1") when the prediction errors generated by one of the two competing memories are minimized [113,121,122]. In other words, reactivating spatial representations of different maps permits using them as alternative hypotheses to be tested against sensory cues, and the competition settles the generative model on one hypothesis or the other.…”
Section: A Generative Modelling Perspective On Flickering and Stochas… (mentioning)
confidence: 97%
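This passage reads naturally as sequential Bayesian model comparison: each stored map predicts the incoming sensory cues, and the map whose prediction errors stay smallest accumulates posterior probability. The sketch below makes that interpretation concrete under assumed Gaussian likelihoods; it is an illustration of the mechanism, not the cited authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two competing map hypotheses, each predicting cues with a different mean
# (Gaussian likelihoods assumed for illustration).
map_means = np.array([0.0, 3.0])
log_posterior = np.log([0.5, 0.5])               # start undecided

cues = rng.normal(loc=3.0, scale=1.0, size=20)   # animal is "in environment 2"

for cue in cues:
    # Smaller squared prediction error -> higher log-likelihood for that map.
    log_posterior += -0.5 * (cue - map_means) ** 2
    log_posterior -= np.logaddexp.reduce(log_posterior)  # renormalize

print(np.exp(log_posterior))  # posterior settles on the second map
```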
“…During offline periods, bottom-up stimuli are weak or absent; they cannot elicit prediction errors to correct priors, which are therefore continuously reiterated. Hence, spontaneous activity may reflect the recirculation (or resampling) of the model's priors, or the spatiotemporal patterns acquired during exposure to external stimuli [13,57,70–72]. This hypothesis explains the resemblance (at the level of average statistics) between brain activations during spontaneous and evoked cortical activity [27,32–35,37,73–76], but it does not fully specify the content of the priors themselves.…”
Section: Inferring Generic and Low-dimensional Spatiotemporal Priors (mentioning)
confidence: 99%
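In generative-model terms, "recirculating the model's priors" when bottom-up input is absent is ancestral sampling with no likelihood term: activity is generated purely top-down. A toy sketch under an assumed linear-Gaussian model (the model form and all sizes are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed two-level linear-Gaussian generative model:
# latent z ~ N(0, I), observation x = W z + noise.
W = rng.normal(size=(10, 3))

def spontaneous_activity(n_samples):
    """Offline 'resampling of priors': with no bottom-up input there are no
    prediction errors, so patterns are drawn top-down from the prior alone."""
    z = rng.normal(size=(n_samples, 3))
    return z @ W.T + 0.1 * rng.normal(size=(n_samples, 10))

offline_patterns = spontaneous_activity(100)
```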
“…The second kind of optimization consists of generating (sampling) fictive data from the model's probability distribution, and then using these fictive data, as if they were real data, to optimize the same or other models [71,112]. This method was used in wake-sleep, an early algorithm for training unsupervised generative models.…”
Section: Learning Simpler and More Accurate Generative Models Without… (mentioning)
confidence: 99%
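Wake-sleep, as referenced here, alternates two phases: the wake phase fits the generative weights to real data recognized by the inference network, and the sleep phase fits the recognition weights to fictive data "dreamed" by the generative model. Below is a heavily simplified single-layer, binary-unit sketch in the spirit of the original delta-rule updates; the flat latent prior, sizes, and learning rate are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

D, H, LR = 8, 4, 0.05                    # visible/hidden sizes, learning rate
W_gen = rng.normal(0, 0.1, size=(D, H))  # generative weights: h -> x
W_rec = rng.normal(0, 0.1, size=(H, D))  # recognition weights: x -> h

data = (rng.random((500, D)) < 0.3).astype(float)  # toy binary dataset

for x in data:
    # Wake phase: recognize real data, then nudge the generative model
    # toward reconstructing it from the inferred hidden state.
    h = (rng.random(H) < sigmoid(W_rec @ x)).astype(float)
    W_gen += LR * np.outer(x - sigmoid(W_gen @ h), h)

    # Sleep phase: dream fictive data top-down, then nudge the recognition
    # model toward recovering the hidden state that generated it.
    h_dream = (rng.random(H) < 0.5).astype(float)   # flat prior (assumption)
    x_dream = (rng.random(D) < sigmoid(W_gen @ h_dream)).astype(float)
    W_rec += LR * np.outer(h_dream - sigmoid(W_rec @ x_dream), x_dream)
```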