2021
DOI: 10.1038/s41598-021-81157-z
Context-dependent extinction learning emerging from raw sensory inputs: a reinforcement learning approach

Abstract: The context-dependence of extinction learning has been well studied and requires the hippocampus. However, the underlying neural mechanisms are still poorly understood. Using memory-driven reinforcement learning and deep neural networks, we developed a model that learns to navigate autonomously in biologically realistic virtual reality environments based on raw camera inputs alone. Neither is context represented explicitly in our model, nor is context change signaled. We find that memory-intact agents learn di…
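The modeling approach summarized in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation, of a memory-driven Q-network that maps raw camera frames to action values: a small convolutional encoder feeds an LSTM memory module, so any context information must be inferred from the history of sensory inputs rather than from an explicit context signal. The frame size, layer dimensions, and number of actions are illustrative assumptions.

# Minimal sketch (not the authors' code): a recurrent Q-network over raw camera frames.
import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    def __init__(self, n_actions: int = 3, hidden_size: int = 128):
        super().__init__()
        # Convolutional encoder for raw RGB camera frames (assumed 64x64 input).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Memory module: an LSTM integrates observations over time, so context
        # can only be inferred from the history of sensory inputs.
        self.memory = nn.LSTM(input_size=32 * 6 * 6, hidden_size=hidden_size,
                              batch_first=True)
        self.q_head = nn.Linear(hidden_size, n_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, 3, 64, 64)
        b, t = frames.shape[:2]
        features = self.encoder(frames.reshape(b * t, *frames.shape[2:]))
        features = features.reshape(b, t, -1)
        out, hidden = self.memory(features, hidden)
        return self.q_head(out), hidden

# Example: Q-values for a batch containing one 10-step camera sequence.
q_net = RecurrentQNetwork()
q_values, _ = q_net(torch.zeros(1, 10, 3, 64, 64))
print(q_values.shape)  # torch.Size([1, 10, 3])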


Cited by 16 publications (12 citation statements) · References 42 publications
“…All simulations were carried out in virtual environments using the CoBeL-RL (Closed-loop simulator of complex behavior and learning based on reinforcement learning and deep neural networks) modeling framework [ 31 ]. In the guidance task, the agent started from a random location and had to navigate to a fixed, unmarked goal location within a square environment of size 2.75m × 2.75m, unless otherwise specified.…”
Section: Methods (mentioning)
confidence: 99%
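The guidance task quoted above can be made concrete with a short sketch. The following is a minimal toy version, not the CoBeL-RL implementation: the agent starts at a uniformly random position in a 2.75 m × 2.75 m square arena and receives a reward when it comes within a small radius of a fixed, unmarked goal. The goal position, goal radius, step size, and discrete action set are assumptions for illustration, and the observation here is the agent's position rather than the raw camera image used in the actual framework.

# Minimal sketch (not CoBeL-RL) of the guidance task described in the quote above.
import numpy as np

class GuidanceTask:
    SIZE = 2.75                     # side length of the square arena in metres
    GOAL = np.array([2.0, 2.0])     # fixed, unmarked goal location (assumed)
    GOAL_RADIUS = 0.2               # trial ends within this distance of the goal (assumed)
    STEP = 0.1                      # translation per step in metres (assumed)
    ACTIONS = {0: (0.0, STEP), 1: (0.0, -STEP), 2: (STEP, 0.0), 3: (-STEP, 0.0)}

    def reset(self):
        # Start each trial from a uniformly random location in the arena.
        self.pos = np.random.uniform(0.0, self.SIZE, size=2)
        return self.pos.copy()

    def step(self, action: int):
        # Move and clip to the arena boundaries.
        self.pos = np.clip(self.pos + np.array(self.ACTIONS[action]),
                           0.0, self.SIZE)
        done = np.linalg.norm(self.pos - self.GOAL) < self.GOAL_RADIUS
        reward = 1.0 if done else 0.0
        return self.pos.copy(), reward, done

# Example rollout with random actions.
env = GuidanceTask()
obs = env.reset()
for _ in range(100):
    obs, reward, done = env.step(np.random.randint(4))
    if done:
        break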
“…In this paper, we introduced CoBeL-RL, a RL framework oriented toward computational neuroscience, which provides a large range of environments, established RL models and analysis tools, and can be used to simulate a variety of behavioral tasks. Already, a set of computational studies focusing on explaining animal behavior (Walther et al, 2021 ; Zeng et al, 2022 ) as well as neural activity (Diekmann and Cheng, 2022 ; Vijayabaskaran and Cheng, 2022 ) have employed predecessor versions of CoBeL-RL. The framework has been expanded and refined since these earlier studies.…”
Section: Discussion (mentioning)
confidence: 99%
“…Additionally, CoBeL-RL has been used to understand the emergence of other spatial representations, e.g., head direction modulated cells, and their dependence on navigational strategy employed (Vijayabaskaran and Cheng, 2022 ). The initial version of the framework was used to analyze representational changes resulting from the learning of context-specific behavior in an extinction learning paradigm (Walther et al, 2021 ). CoBeL-RL currently only provides a small repertoire for the analysis of network activity with a focus on spatial representations.…”
Section: Discussion (mentioning)
confidence: 99%