2021
DOI: 10.1101/2021.11.24.469830
Preprint

The functional role of episodic memory in spatial learning

Abstract: Episodic memory has been studied extensively in the past few decades, but so far little is understood about how it is used to affect behavior. Here we postulate three learning paradigms: one-shot learning, replay learning, and online learning, where in the first two paradigms episodic memory is retrieved for decision-making or replayed to the neocortex for extracting semantic knowledge, respectively. In the third paradigm, the neocortex directly extracts information from online experiences as they occur, but d…

Cited by 4 publications (5 citation statements)
References 63 publications
“…As part of the Gym Interface, a DQN agent implemented using Keras/Tensorflow receives the processed image observation, the instant reward and then takes an action, deciding whether the artificial agent should move forward, turn left, etc. The DQN agents can utilize memory replay to speed up learning and to study the role of short-term memory and episode replay in spatial navigation tasks (Zeng et al, 2022 ). It is possible to turn the memory on or off, limit its size, or change the statistics of how memories are replayed to model the effect of manipulating episodic memory (Diekmann and Cheng, 2022 ).…”
Section: Results
confidence: 99%
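The quoted passage describes turning episodic memory on or off, limiting its size, and changing the statistics of how memories are replayed. A minimal, hypothetical sketch of such a manipulable replay buffer (not the actual CoBeL-RL or DQN implementation, and written without the Keras/TensorFlow dependency for brevity) might look like:

```python
import random
from collections import deque

class ReplayBuffer:
    """Hypothetical episodic-memory buffer that can be disabled (full lesion),
    size-limited (partial lesion), or biased toward recent experiences."""

    def __init__(self, enabled=True, max_size=1000, recency_bias=0.0):
        self.enabled = enabled
        self.memory = deque(maxlen=max_size)  # capping maxlen models a partial lesion
        self.recency_bias = recency_bias      # 0 = uniform replay; >0 favors recent items

    def store(self, transition):
        # With memory "off", no experiences are retained (full lesion).
        if self.enabled:
            self.memory.append(transition)

    def sample(self, batch_size):
        # A lesioned or empty buffer yields nothing to replay.
        if not self.enabled or not self.memory:
            return []
        n = len(self.memory)
        # Weight the i-th (oldest-first) memory by (i+1)^bias, so a larger
        # recency_bias skews replay toward recent experiences.
        weights = [(i + 1) ** self.recency_bias for i in range(n)]
        return random.choices(list(self.memory), weights=weights,
                              k=min(batch_size, n))

# Usage: an intact buffer replays stored transitions; a lesioned one does not.
intact = ReplayBuffer(enabled=True, max_size=100)
lesioned = ReplayBuffer(enabled=False)
for t in range(10):
    intact.store((t, "obs", "action", 0.0))
    lesioned.store((t, "obs", "action", 0.0))
```

Varying `max_size` and `recency_bias` across simulations is one simple way to model the memory manipulations the citing authors describe.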
“…In this paper, we introduced CoBeL-RL, a RL framework oriented toward computational neuroscience, which provides a large range of environments, established RL models and analysis tools, and can be used to simulate a variety of behavioral tasks. Already, a set of computational studies focusing on explaining animal behavior (Walther et al, 2021 ; Zeng et al, 2022 ) as well as neural activity (Diekmann and Cheng, 2022 ; Vijayabaskaran and Cheng, 2022 ) have employed predecessor versions of CoBeL-RL. The framework has been expanded and refined since these earlier studies.…”
Section: Discussion
confidence: 99%
“…Furthermore, there is a framework for simulating closed-loop behavior based on reinforcement learning that is oriented towards neuroscience (CoBeL-RL) (Walther et al, 2021). While this allows for studying the computational solutions that emerge under different constraints, such as place-cell like representations Vijayabaskaran and Cheng (2022), how the statistics of hippocampal replay emerges in the network Diekmann and Cheng (2022), or the function of memory replay Zeng, Wiskott, and Cheng (2022), these studies were based on machine learning implementations of reinforcement learning and neural networks. Even though they help us understand the abstract computations performed by the brain during spatial navigation, the internal processes of this model significantly differ from those in the brain.…”
Section: Discussion
confidence: 99%
“…In this paper, we introduced CoBeL-RL, a RL framework oriented towards computational neuroscience, which provides a large range of environments, established RL models and analysis tools, and can be used to simulate a variety of behavioral tasks. Already, a set of computational studies focusing on explaining animal behavior (Walther et al, 2021; Zeng et al, 2021) as well as neural activity (Diekmann and Cheng, 2022; Vijayabaskaran and Cheng, 2022) have employed predecessor versions of CoBeL-RL. The framework has been expanded and refined since these earlier studies.…”
Section: Discussion
confidence: 99%
“…As part of the Gym Interface, a DQN agent implemented using Keras/Tensorflow receives the processed image observation, the instant reward and then takes an action, deciding whether the artificial agent should move forward, turn left, etc. The DQN agents can utilize memory replay to speed up learning and to study the role of short-term memory and episode replay in spatial navigation tasks (Zeng et al, 2021). It is possible to turn the memory on or off, limit its size, or change the statistics of how memories are replayed to model the effect of manipulating episodic memory (Diekmann and Cheng, 2022).…”
Section: CoBeL-RL
confidence: 99%