2022
DOI: 10.7554/elife.76384

Learning cortical representations through perturbed and adversarial dreaming

Abstract: Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systemically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learn…

Cited by 35 publications (38 citation statements)
References 102 publications

Citation statements (ordered by relevance):
“…For instance, imagery mechanisms could act as the generator of quasi-perceptual experiences, while reality monitoring could serve as the discriminator to distinguish between sensory inputs from real or imagined sources (Gershman, 2019; Lau, 2019). Recent studies investigated involuntary visual experiences using generative neural network models, such as in memory replay (van de Ven, Siegelmann, & Tolias, 2020), intrusive imagery (Cushing et al., 2023), and adversarial dreaming (Deperrois, Petrovici, Senn, & Jordan, 2022). Regarding voluntary visual mental imagery, some key strategies may involve modeling the retrieval process of representations pertaining to semantic information and visual features, and incorporating biologically inspired recurrence in visual imagery processing (Lindsay, Mrsic-Flogel, & Sahani, 2022).…”
Section: Aligned DNNs May Be All We Need (mentioning)
confidence: 99%
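
The generator/discriminator mapping in the statement above can be made concrete with a small sketch: a "reality monitor" implemented as a GAN-style discriminator that scores whether an input came from the senses or from an internal generator. The class name, layer sizes, and random stand-in data below are illustrative assumptions, not the architecture of any cited model.

```python
# Minimal sketch (assumed names and sizes): a reality monitor as a binary
# discriminator over flattened images, trained to separate sensed from imagined input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RealityMonitor(nn.Module):
    """Scores whether an image is externally caused (sensed) or internally generated."""
    def __init__(self, n_pixels: int = 28 * 28):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, 1),               # single logit: external vs. imagined
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(start_dim=1))

monitor = RealityMonitor()
sensed = torch.rand(8, 28 * 28)              # stand-in for sensory input
imagined = torch.rand(8, 28 * 28)            # stand-in for generator output
loss = F.binary_cross_entropy_with_logits(monitor(sensed), torch.ones(8, 1)) + \
       F.binary_cross_entropy_with_logits(monitor(imagined), torch.zeros(8, 1))
loss.backward()                              # pushes the monitor to tell them apart
```
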
“…Third, an exciting possibility is that humans enlarge their initial dataset provided through external input by creatively using already encountered instances to create new instances during offline states - a concept similar to what in reinforcement learning is called experience replay (e.g., see Lin, 1991, 1992; Mnih et al., 2015; O'Neill et al., 2010). The idea is that during imagination and dreaming, stored memories are combined to generate new training data (e.g., see Deperrois et al., 2022). Thus, additionally to the external input provided by the sensory system, an internal generative model provides the visual system with additional training data.…”
Section: Discussion (mentioning)
confidence: 99%
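
A minimal sketch of this "dream up extra training data" idea, under assumed names and dimensions: stored latent codes are mixed with convex (Dirichlet) weights and decoded into new, never-experienced samples. The toy decoder and the random stand-in "memories" are assumptions for illustration, not the generative model of the cited papers.

```python
# Hedged sketch: convex mixtures of stored latent codes decoded into extra images.
import torch
import torch.nn as nn

latent_dim, n_stored, n_mix = 32, 100, 4
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, 28 * 28), nn.Sigmoid())
stored_latents = torch.randn(n_stored, latent_dim)      # stand-in for stored memories

def dream_batch(batch_size: int = 16) -> torch.Tensor:
    """Create 'dreamed' inputs from convex combinations of stored latent codes."""
    idx = torch.randint(0, n_stored, (batch_size, n_mix))          # pick memories
    w = torch.distributions.Dirichlet(torch.ones(n_mix)).sample((batch_size,))
    mixed = (w.unsqueeze(-1) * stored_latents[idx]).sum(dim=1)     # convex mix
    return decoder(mixed)                                          # extra training images

extra_images = dream_batch()
print(extra_images.shape)   # torch.Size([16, 784])
```

These decoded mixtures would then be fed to the perceptual system as additional training input alongside ordinary sensory data.
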
“…Our results are thus not suited to answer this question. However, recently, Deperrois et al. (2022) proposed a model based on generative adversarial networks (GANs), which captures the idea of learning during offline states by distinguishing between wake states, where external input is processed, and offline states, where the model is trained by a generative model, either by reconstructing perturbed images based on latent representations (similar to simple memory recall as during non-REM sleep) or by generating new visual sensory input based on convex combinations of multiple randomly chosen stored latent representations (similar to the rearranging of stored episodic patterns during REM sleep). Experiments with these models show that introducing such offline states increases robustness and the linear separability of latent representations.…”
Section: Discussion (mentioning)
confidence: 99%
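
Loosely following the three phases described in the statement above, the sketch below shows one wake / NREM / REM update with a toy encoder, generator, and discriminator. The MLP architectures, losses, occlusion pattern, and two-latent mixing are simplifying assumptions, not a reimplementation of the model of Deperrois et al. (2022).

```python
# Illustrative wake / NREM / REM update under assumed shapes and losses.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 32
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Sigmoid())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))
opt_eg = torch.optim.Adam([*encoder.parameters(), *generator.parameters()], lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

memory = []                                   # stored latent representations
x = torch.rand(16, 784)                       # stand-in for one batch of sensory input

# Wake: encode external input, store the latents, learn to reconstruct them.
z = encoder(x)
memory.append(z.detach())
wake_loss = F.mse_loss(generator(z), x)

# NREM: replay a stored latent, perturb the generated image (crude occlusion),
# and train the encoder to recover the original latent (robust representations).
z_old = random.choice(memory)
x_occ = generator(z_old).clone()
x_occ[:, :392] = 0.0                          # occlude half of the image
nrem_loss = F.mse_loss(encoder(x_occ), z_old)

# REM: decode a convex mix of two stored latents ("dream") and play the
# adversarial game against the discriminator.
lam = torch.rand(1)
dream = generator(lam * random.choice(memory) + (1 - lam) * random.choice(memory))
g_loss = F.binary_cross_entropy_with_logits(   # make dreams look "external"
    discriminator(dream), torch.ones(dream.size(0), 1))

opt_eg.zero_grad()
(wake_loss + nrem_loss + g_loss).backward()
opt_eg.step()

# Discriminator ("reality monitor"): external input -> 1, dreamed input -> 0.
d_loss = F.binary_cross_entropy_with_logits(discriminator(x), torch.ones(16, 1)) + \
         F.binary_cross_entropy_with_logits(discriminator(dream.detach()),
                                            torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()
```

In this toy loop the NREM term makes the encoder robust to occluded replays, while the REM term pits mixed "dreams" against the discriminator, mirroring the adversarial objective described in the quoted statement.
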
“…Another interesting perspective is the qualitative reorganization of memories during sleep (62). While our model focuses on synaptic plasticity and quantitative memory reorganization (i.e., consolidation vs. forgetting), a recent theory proposes that the learning cycle mimicking wakefulness, NREM sleep, and REM sleep promotes the formulation of new cortical representations, not just strengthening or weakening experiences (63). Bridging synaptic plasticity rules mainly obtained in rodent experiments and the qualitative memory reorganization proposed in the cognitive study is an interesting future direction.…”
Section: Discussion (mentioning)
confidence: 99%