2018
DOI: 10.1101/338947
Preprint

Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks

Abstract: Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e., they respond to a small neighborhood of stimulus space. What is the functional significance of such representations and how can they arise? Here, we propose that localized receptive fields emerge in similarity-preserving networks of rectifying neurons that learn low-dimensional manifolds populated by sensory inputs. Numerical simulations of such networks on standard datasets yield manifold-tiling lo…
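
The mechanism sketched in the abstract can be illustrated numerically. The script below is not the authors' code; it is a minimal sketch, assuming a plain non-negative similarity-matching objective, min over Y >= 0 of ||X^T X - Y^T Y||_F^2, solved by projected gradient descent for inputs sampled from a circle. All names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# T stimuli on a circle (a 1-d manifold embedded in 2-d), m output units.
T, m = 200, 40
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, T))
X = np.vstack([np.cos(theta), np.sin(theta)])      # 2 x T inputs
G_in = X.T @ X                                     # input similarity (Gram) matrix

# Projected gradient descent on || G_in - Y^T Y ||_F^2 subject to Y >= 0.
Y = 0.1 * rng.random((m, T))
lr = 1e-3
for _ in range(3000):
    G_out = Y.T @ Y                                # output similarity matrix
    grad = -4.0 * Y @ (G_in - G_out)               # gradient of the Frobenius cost
    Y = np.maximum(Y - lr * grad, 0.0)             # rectification / projection

# Each row of Y is one unit's response over the circle; with rectification the
# solution tends toward localized bumps that together tile the manifold.
active = Y > 0.05 * Y.max()
print("mean fraction of the circle covered by one unit:", active.mean())

This offline batch formulation is only meant to show the emergence of localization; the paper itself characterizes the optimum analytically and derives online, biologically plausible network implementations.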

Cited by 26 publications (58 citation statements); references 29 publications.
“…Further supporting evidence was recently brought by several works, focusing on the production of such receptive fields in the context of unsupervised learning. Learning of symmetric data with similarity-preserving representations [10] or with auto-encoders [11] both led to localized receptive fields tiling the underlying manifold, in striking analogy with place cells and spatial maps in the hippocampus. In turn, such high-dimensional place-cell-like representations have putative functional advantages: they can be efficiently and accurately learned by recurrent neural networks, and thus allow for the storage and retrieval of multiple cognitive low-dimensional maps [12]. The present work is an additional effort to investigate this issue in a highly simplified and idealized framework of unsupervised learning, where both the data distribution and the machine are under full control.…”
mentioning
confidence: 99%
“…To solve the objective (17) in the online setting, we introduce the constraints in the cost via Lagrange multipliers and using the variable substitution trick, we can derive a NN implementation of this algorithm [31] (Fig. 4A).…”
Section: A Similarity-based Cost Function and NN for Clustering
mentioning
confidence: 99%
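
The derivation route mentioned in this excerpt (constraints introduced via Lagrange multipliers, then a variable-substitution step) is commonly used to turn an offline similarity-matching cost into an online network with Hebbian feedforward and anti-Hebbian lateral weights. The sketch below shows only that generic online form; it does not reproduce objective (17), Fig. 4A, or reference [31] of the citing paper, and all names and constants are illustrative.

import numpy as np

def online_similarity_matching(X, m, eta=0.05, n_iter=100, dt=0.1, seed=0):
    """Generic online sketch: X is a d x T input stream, m the output dimension."""
    rng = np.random.default_rng(seed)
    d, T = X.shape
    W = 0.1 * rng.standard_normal((m, d))   # feedforward weights (Hebbian)
    M = np.zeros((m, m))                    # lateral weights (anti-Hebbian)
    Y = np.zeros((m, T))
    for t in range(T):
        x, y = X[:, t], np.zeros(m)
        for _ in range(n_iter):             # rectified recurrent dynamics to a fixed point
            y += dt * (np.maximum(W @ x - M @ y, 0.0) - y)
        W += eta * (np.outer(y, x) - W)     # local Hebbian update
        M += eta * (np.outer(y, y) - M)     # local anti-Hebbian update
        np.fill_diagonal(M, 0.0)            # no self-inhibition
        Y[:, t] = y
    return W, M, Y

The point of the variable-substitution step is that, once the auxiliary weight matrices are introduced, every update depends only on quantities available at a single synapse or neuron, which is what makes the circuit interpretation possible.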
“…, T, m being the number of output channels, or hidden units in our two-layer network, Figure 1, left. Manifold-tiling networks have been derived [8] from similarity-preserving objectives [5] with a non-negativity constraint. Similarity preservation postulates that similar input pairs, x_t and x_t′, evoke similar output pairs, h_t and h_t′.…”
Section: Review of the Manifold-tiling Network Derived From
mentioning
confidence: 99%
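
For reference, the non-negative similarity-preserving objective this excerpt points to is usually written in roughly the following form (a sketch; the precise normalization and constraints in [5, 8] may differ):

\[
\min_{h_1,\dots,h_T \,\ge\, 0} \;\; \sum_{t=1}^{T} \sum_{t'=1}^{T} \left( x_t^\top x_{t'} - h_t^\top h_{t'} \right)^2 ,
\]

i.e., pairwise output similarities h_t^T h_t′ are matched to input similarities x_t^T x_t′ subject to rectification; with enough hidden units m, the optimal h_t become localized bumps that tile the input manifold.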
“…with the same update for w as in Eq. (8). The behavior of both algorithms is almost indistinguishable, so we only report the results from Eq.…”
Section: A Neural Network for Semi-supervised Learning
mentioning
confidence: 99%