2021
DOI: 10.1016/j.neuron.2020.11.021
Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments

Cited by 42 publications (55 citation statements)
References 59 publications
“…The segmentation results of the MSCC-MDF established in this research were compared with those of CNN [21], fully convolutional network (FCN) [22], SegNet [23], and deep Q-network (DQN) [24] (Figure 9). The Dice coefficient of the MSCC-MDF model was lower than that of the other algorithms, and the difference was considerable (P < 0.05).…”
Section: Results
confidence: 99%
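The statement above compares segmentation models by their Dice coefficient. As background, the Dice coefficient measures overlap between a predicted binary mask and a ground-truth mask as 2|A∩B| / (|A| + |B|); a minimal sketch (the masks here are toy examples, not from the cited study):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / total

# Two toy 3x3 binary segmentation masks: 3 foreground pixels each, 2 overlapping
a = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
b = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```

A Dice value of 1 indicates identical masks and 0 indicates no overlap, which is why segmentation papers report it alongside a significance test as above.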
“…Reinforcement learning is a particular area of machine learning concerned with how intelligent agents should select actions in order to maximize a prescribed reward through trial-and-error interactions within a dynamic environment [42]. DRL is the combination of reinforcement learning and deep learning to make these trial-and-error decisions [43]. At the mesoscopic scale detailed below, model agents seek to have a desired phenotype at each simulation time step.…”
Section: Methods
confidence: 99%
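The trial-and-error loop described in that statement can be illustrated with tabular Q-learning, the classical precursor of the deep variants cited here (DRL replaces the table with a neural network). This is a generic sketch on a hypothetical 1-D corridor task, not the model from the cited paper:

```python
import random

# Hypothetical corridor: states 0..4, reward 1 on reaching state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # tabular action-value estimates

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
for _ in range(200):  # trial-and-error episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy steps right toward the rewarded state.
print([max((0, 1), key=lambda x: Q[s][x]) for s in range(GOAL)])
```

A deep Q-network, as in [43], would approximate Q(s, a) with a network trained on the same temporal-difference target, which is what lets these agents scale to high-dimensional state spaces.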
“…An emerging body of work has used DRL to train ANNs that solve tasks closely inspired by tasks from neuroscience. For instance, agents have been trained to study learning and dynamics in the motor cortex [Song et al., 2020; Weinstein and Botvinick, 2017], time encoding in the hippocampus [Lin and Richards, 2021], reward-based learning and meta-learning in the prefrontal cortex [Botvinick et al., 2019; Song et al., 2017; Wang et al., 2018], and task-associated representations across multiple brain areas [Cross et al., 2021]. There have been several recent perspectives articulating the relevance of this emerging algorithmic toolkit to neuroscience [Botvinick et al., 2020; Gershman and Ölveczky, 2020] and ethology [Crosby, 2020].…”
Section: Related Work
confidence: 99%