2019
DOI: 10.48550/arxiv.1907.06374
Preprint
What does it mean to understand a neural network?

Abstract: We can define a neural network that can learn to recognize objects in less than 100 lines of code. However, after training, it is characterized by millions of weights that contain the knowledge about many object types across visual scenes. Such networks are thus dramatically easier to understand in terms of the code that makes them than the resulting properties, such as tuning or connections. In analogy, we conjecture that rules for development and learning in brains may be far easier to understand than their …
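The abstract's central contrast can be made concrete with a minimal sketch (not the paper's actual code): a tiny two-layer network, defined and trained in a few dozen lines, whose learned behavior nonetheless lives in a weight list longer than the program itself. The task and hyperparameters below are illustrative assumptions.

```python
# A minimal sketch (not the paper's code): a tiny two-layer network
# trained by gradient descent on a toy classification task. The point
# is the abstract's contrast: the defining code is short, while the
# trained weights are a long list of opaque numbers.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: label points by whether they lie inside the unit circle.
X = rng.uniform(-2.0, 2.0, size=(512, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float).reshape(-1, 1)

# Architecture: 2 inputs -> 32 tanh units -> 1 sigmoid output.
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # output probability
    return h, p

# Learning rule: full-batch gradient descent on cross-entropy loss.
lr = 0.5
for _ in range(2000):
    h, p = forward(X)
    g_out = (p - y) / len(X)             # gradient at the output
    g_h = (g_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

_, p = forward(X)
accuracy = float(((p > 0.5) == y).mean())
n_params = sum(a.size for a in (W1, b1, W2, b2))
print(n_params, accuracy)  # 129 weights: more numbers than lines of code
```

Even at this toy scale the asymmetry appears: the learning rule fits in a handful of lines, but explaining what each of the 129 weights does after training is already awkward, and the gap only widens with millions of weights.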

Cited by 8 publications (6 citation statements) · References 19 publications
“…To have a model complex enough to perform real-world tasks, we must sacrifice the desire to make simple statements about how each stage of it works, a goal inherent in much of systems neuroscience. For this reason, an alternative way to describe the network that is compact without relying on simple statements about computations has been proposed [97,126]; this viewpoint focuses on describing the architecture, optimization function, and learning algorithm of the network, instead of attempting to describe specific computations, because specific computations and representations can be seen as simply emerging from these three factors.…”
Section: Are They Understandable? (confidence: 99%)
“…To understand a neural network [44] means not only correct coding and proper weights: handling the timing relations properly is at least as crucial. The larger the system, the more crucial it becomes.…”
Section: Discussion (confidence: 99%)
“…For example, it is not understood whether the ability to decode movement patterns from activity in motor cortex is because neurons have been found which are designed to encode movement, or because their activity is highly correlated with movement as a side-effect of their true functional roles. Most dimensionality reduction techniques are agnostic to this distinction, yet hopefully we can distill neural activity down to a space that reveals its true underlying causes (Lillicrap & Kording, 2019). Moving forward, careful design of experiments that can modify neural response properties, through either direct manipulation (Gradinaru et al, 2010; Roth, 2016) or new learning paradigms (Shenoy & Kao, 2021), is needed to better interpret neural representations, and to understand how circuit and systems-level mechanisms modulate or change the distribution of the neural state space.…”
Section: Challenges (confidence: 99%)