2017
DOI: 10.7554/elife.22901
Towards deep learning with segregated dendrites

Abstract: Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. …
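For readers who want a concrete picture of the architecture the abstract describes, here is a minimal, hypothetical sketch of a hidden-layer unit with electrotonically segregated basal (feedforward) and apical (feedback) compartments. All names, layer sizes, and the simple apical-driven weight update are illustrative assumptions, not the paper's actual learning rule.

```python
import numpy as np

# Minimal sketch (not the authors' code): one hidden layer whose units keep
# feedforward and feedback drive in separate compartments, in the spirit of
# the model described in the abstract. Sizes and parameters are assumed.

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 784, 100, 10           # layer sizes (illustrative)
W_ff = rng.normal(0, 0.1, (n_hidden, n_in))    # feedforward weights onto basal dendrites
W_fb = rng.normal(0, 0.1, (n_hidden, n_out))   # feedback weights onto apical dendrites

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_layer(x, top_down):
    """Compute segregated basal and apical potentials for the hidden layer.

    x        -- sensory input vector (drives the basal compartments)
    top_down -- higher-order feedback vector (drives the apical compartments)
    """
    basal = W_ff @ x            # feedforward drive, kept separate from feedback
    apical = W_fb @ top_down    # feedback drive, integrated in its own compartment
    rate = sigmoid(basal)       # somatic output driven primarily by the basal compartment
    return rate, basal, apical

# Example: the apical potential can act as a local credit signal for a
# delta-rule-like update (a hedged stand-in for the paper's learning rule).
x = rng.random(n_in)
feedback = rng.random(n_out)
rate, basal, apical = hidden_layer(x, feedback)
lr = 0.01
W_ff += lr * np.outer(apical * rate * (1 - rate), x)  # illustrative update only
```

The point of the sketch is the separation itself: the apical (feedback) signal never mixes into the somatic output directly, so it remains available as a teaching signal for plasticity.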

Citations: cited by 358 publications (398 citation statements)
References: 74 publications (238 reference statements)
“…Recently, several researchers have proposed alternatives to the well-adopted backpropagation algorithm in ANN training based on this mechanism. [266,274] Using multi-terminal Ag-based CBRAM memristor, the extra terminal with electrical bias is shown to mimic the use of external variables to modulate its plasticity and can be wired into more complicated circuits to potentially mimic such control mechanisms in the brain in the future. [275]…”
Section: Control Of Plasticity (citation type: mentioning)
confidence: 99%
“…Here again, there has been significant progress toward algorithms that can assign and propagate credits in more biologically palatable forms. 137,141,142 Another widely successful approach to tuning weights is via reinforcement learning. 143 Reinforcement learning algorithms have demonstrated seemingly magical performance in tasks, such as learning how to play games like chess, Go, or different types of video games, even beating world champions.…”
Section: Learning and Plasticity (citation type: mentioning)
confidence: 99%
“…While some of these studies have been performed using spiking networks, they still use effectively a rate-based approach in which a given input activity vector is interpreted as the firing rate of a set of input neurons (Eliasmith et al., 2012; Diehl & Cook, 2015; Guergiuev et al., 2016; Neftci et al., 2016; Mesnard, Gerstner, & Brea, 2016). While this approach is appealing because it can often be related directly to equivalent rate-based models with stationary neuronal transfer functions, it also largely ignores the idea that individual spike timing may carry additional information that could be crucial for efficient coding (Thalmeier, Uhlmann, Kappen, & Memmesheimer, 2016; Denève & Machens, 2016; Abbott, DePasquale, & Memmesheimer, 2016; Brendel, Bourdoukan, Vertechi, Machens, & Denéve, 2017) and fast computation (Thorpe, Fize, & Marlot, 1996; Gollisch & Meister, 2008).…”
Section: Introduction (citation type: mentioning)
confidence: 99%
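To make the "rate-based" interpretation in the quoted passage concrete, the following hypothetical Python sketch encodes an input activity vector as independent Poisson spike trains; spike timing within each train carries no information beyond the rate, which is exactly the limitation the passage highlights. The function and parameters are assumptions for illustration, not code from any cited study.

```python
import numpy as np

# Illustrative sketch (assumed): rate-coding an input activity vector as
# Poisson spike trains. Each time bin spikes independently, so only the
# mean rate, not the spike timing, carries information.

rng = np.random.default_rng(1)

def rate_to_spikes(rates_hz, duration_s=0.1, dt=0.001):
    """Convert a vector of firing rates (Hz) into a binary spike raster.

    Returns an array of shape (n_neurons, n_timesteps) in which each entry
    is 1 with probability rate * dt, independently across time bins.
    """
    n_steps = int(duration_s / dt)
    p_spike = np.clip(np.asarray(rates_hz) * dt, 0.0, 1.0)
    return (rng.random((len(rates_hz), n_steps)) < p_spike[:, None]).astype(int)

rates = np.array([5.0, 20.0, 80.0])   # an input activity vector, read as firing rates
raster = rate_to_spikes(rates)
print(raster.sum(axis=1) / 0.1)       # empirical rates approximately recover the input vector
```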