2021
DOI: 10.1101/2021.03.21.436284
Preprint

The Geometry of Concept Learning

Abstract: Understanding the neural basis of our remarkable cognitive capacity to accurately learn novel high-dimensional naturalistic concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts we can learn given few examples are defined by tightly circumscribed manifolds in the neural firing rate space o…

Cited by 12 publications (29 citation statements) | References 50 publications
“…We note that recently a similar quantification of feature spaces found success in analytically estimating error rates of few-shot learning [29].…”
Section: Feature Space and Geometric Measures
confidence: 74%
“…all the variance is in the first PC dimension. This measure is often used as a measure of subspace dimensionality [27][28][29].…”
confidence: 99%
“…How do high-dimensional models achieve better classification performance for novel categories? If we consider an idealized scenario in which categories are represented by spherical or elliptical manifolds, it is a geometric fact that projections of these manifolds onto linear readout dimensions will concentrate more around their manifold centroids as dimensionality increases (assuming that manifold radius is held constant) (Gorban, Makarov, & Tyukin, 2020; Gorban & Tyukin, 2018; Sorscher et al., 2021). The reason for this is that in high dimensions, most of the manifold’s mass is concentrated along its equator, orthogonal to the linear readout dimension.…”
Section: Results
confidence: 99%
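The equator-concentration argument can be checked numerically: for points drawn uniformly from a unit sphere of fixed radius, the spread of their projections onto any fixed readout direction shrinks roughly as 1/√D. A small sketch under those idealized assumptions (spherical manifold, normalized-Gaussian sampling trick; not code from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

def projection_spread(dim, n=5000):
    """Sample n points uniformly on the unit sphere S^(dim-1) and
    measure the spread of their projections onto a fixed readout axis."""
    pts = rng.normal(size=(n, dim))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # unit-radius manifold
    readout = np.zeros(dim)
    readout[0] = 1.0                                   # fixed readout direction
    return (pts @ readout).std()

for dim in (3, 30, 300, 3000):
    # spread shrinks ~ 1/sqrt(dim): mass concentrates at the equator,
    # so projections concentrate around the manifold centroid
    print(dim, projection_spread(dim))
```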
“…Classifying novel object categories: To see how model ED affected generalization to the task of classifying novel object categories, we used a transfer learning paradigm following Sorscher et al. (2021). For a given model layer, we obtained activations to images from M = 50 different categories, each with N_train = 50 samples.…”
Section: Predicting Neural Responses
confidence: 99%
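The transfer paradigm in Sorscher et al. (2021) evaluates few-shot generalization with prototype (concept-mean) classifiers applied to layer activations. A minimal sketch of that style of evaluation, using made-up activation arrays and a hypothetical few_shot_accuracy helper (names and toy data are illustrative):

```python
import numpy as np

def few_shot_accuracy(feats_a, feats_b, n_train=5, n_trials=100, rng=None):
    """Prototype-based few-shot evaluation on a pair of categories:
    average n_train training features per class into a prototype, then
    classify held-out samples by the nearest prototype (Euclidean).
    feats_a, feats_b: (n_samples, n_features) layer activations."""
    rng = rng or np.random.default_rng(0)
    correct, total = 0, 0
    for _ in range(n_trials):
        ia = rng.permutation(len(feats_a))
        ib = rng.permutation(len(feats_b))
        proto_a = feats_a[ia[:n_train]].mean(axis=0)
        proto_b = feats_b[ib[:n_train]].mean(axis=0)
        for feats, idx, label in ((feats_a, ia, 0), (feats_b, ib, 1)):
            test = feats[idx[n_train:]]                    # held-out samples
            d_a = np.linalg.norm(test - proto_a, axis=1)
            d_b = np.linalg.norm(test - proto_b, axis=1)
            pred = (d_b < d_a).astype(int)                 # 1 = closer to b
            correct += (pred == label).sum()
            total += len(test)
    return correct / total

# toy activations for two "novel categories" (50 samples each, 128 features)
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(50, 128))
b = rng.normal(0.5, 1.0, size=(50, 128))
print(few_shot_accuracy(a, b, n_train=5))
```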
“…The approach presented in this article can of course also be generalized to other domains and data sets, such as the THINGS database and its associated embeddings [26] or the recently published similarity ratings and embeddings for a subset of ImageNet [50]. It can furthermore be seen as a contribution to the currently emerging field of research that tries to align neural networks with psychological models of cognition [1,5,6,29,35,36,44,46,47,48,53,54,59,60].…”
Section: Discussion
confidence: 99%