2016
DOI: 10.1101/058545
Preprint

Towards an integration of deep learning and neuroscience

Abstract: Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured…


Cited by 43 publications (38 citation statements)
References 244 publications (469 reference statements)
Citation types: 1 supporting, 37 mentioning, 0 contrasting
“…On the other hand, the function of network nodes and circuits as well as their biophysical attributes likely depend critically upon the scale at which a network is constructed and analyzed. Accordingly, we might also expect networks to be optimized to perform scale-specific functions (Marblestone et al., 2016), and studying a particular scale gives us a unique insight into the network architecture underpinning those functions. Ultimately, network neuroscience will need both approaches—an understanding of network function and organization at specific scales, as well as a map that bridges multiple different spatial, temporal, and topological scales.…”
Section: Discussion (mentioning)
confidence: 99%
“…Anatomically, the policy networks may correspond to circuits in dorsolateral prefrontal cortex, while the value networks may correspond to circuits in the orbitofrontal cortex (Schultz et al., 2000; Takahashi et al., 2011) or basal ganglia (Hikosaka et al., 2014). This architecture also provides a useful example of the hypothesis that various areas of the brain effectively optimize different cost functions (Marblestone et al., 2016): in this case, the policy network maximizes reward, while the value network minimizes the prediction error for future reward.…”
Section: Discussion (mentioning)
confidence: 99%
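
The statement above describes an actor-critic division of labor: one network is trained to maximize reward while a second is trained to minimize the prediction error for future reward. Below is a minimal sketch of that idea in PyTorch; the toy random-walk "environment", network sizes, and hyperparameters are illustrative assumptions and are not drawn from the cited work:

```python
# Minimal actor-critic sketch: two networks, two distinct cost functions.
# The toy episode generator, layer sizes, and learning rate are assumptions
# for the sake of a runnable example, not details from the cited papers.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99

policy_net = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.Tanh(),
                           nn.Linear(32, N_ACTIONS))  # "actor": maximizes reward
value_net = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.Tanh(),
                          nn.Linear(32, 1))           # "critic": predicts future reward
opt = torch.optim.Adam(list(policy_net.parameters()) +
                       list(value_net.parameters()), lr=1e-3)

def toy_episode(steps=16):
    """Random stand-in for an environment: per-step states and rewards."""
    return torch.randn(steps, OBS_DIM), torch.randn(steps)

states, rewards = toy_episode()
dist = torch.distributions.Categorical(logits=policy_net(states))
actions = dist.sample()

# Discounted returns, accumulated backwards through the episode.
returns = torch.zeros_like(rewards)
running = torch.tensor(0.0)
for t in reversed(range(len(rewards))):
    running = rewards[t] + GAMMA * running
    returns[t] = running

values = value_net(states).squeeze(-1)
advantage = (returns - values).detach()  # critic's prediction error guides the actor

policy_loss = -(dist.log_prob(actions) * advantage).mean()  # maximize expected reward
value_loss = (returns - values).pow(2).mean()               # minimize reward-prediction error

opt.zero_grad()
(policy_loss + value_loss).backward()
opt.step()
```

The point the sketch illustrates is that the two networks share no single objective: `policy_loss` pushes the actor toward higher-advantage actions, while `value_loss` is purely a reward-prediction error, mirroring the quoted hypothesis that different brain areas effectively optimize different cost functions.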
“…For this related role, the most common approach thus far has been to construct various categorization engines, interpreting real-time neurally-generated signals [35]. Most significantly, recent advances in deep learning [36] build on earlier neural network architectures [37,38,39] and use brain architecture-based principles to solve very computationally tough problems.…”
Section: Introduction (mentioning)
confidence: 99%