2022
DOI: 10.48550/arxiv.2211.03607
Preprint
Towards a mathematical understanding of learning from few examples with nonlinear feature maps

Abstract: We consider the problem of data classification where the training set consists of just a few data points. We explore this phenomenon mathematically and reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities. The main thrust of our analysis is to reveal the influence on the model's generalisation capabilities of nonlinear feature transformations mapping the original data into high, and possibly…
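To make the abstract's setting concrete, here is a minimal numerical sketch of few-shot classification through a nonlinear feature map. It is not the paper's construction: the data model (two well-separated Gaussian classes), the choice of random Fourier features as the nonlinear map, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy few-shot setup: two Gaussian classes in d = 20 dimensions,
# with only three labelled examples per class.
d, n_shot = 20, 3
mu = np.zeros(d)
mu[0] = 2.0
X_train = np.vstack([rng.normal(-mu, 1.0, size=(n_shot, d)),
                     rng.normal(+mu, 1.0, size=(n_shot, d))])
y_train = np.array([-1] * n_shot + [+1] * n_shot)

# A nonlinear feature map into a much higher-dimensional space:
# random Fourier features (one standard choice of such a map), D >> d.
D = 500
W = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(X):
    """Lift data into the D-dimensional random-feature space."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# The simplest classifier a few samples can support: nearest class mean,
# computed in the lifted feature space.
m_neg = phi(X_train[y_train == -1]).mean(axis=0)
m_pos = phi(X_train[y_train == +1]).mean(axis=0)

def predict(X):
    F = phi(X)
    d_neg = np.linalg.norm(F - m_neg, axis=1)
    d_pos = np.linalg.norm(F - m_pos, axis=1)
    return np.where(d_pos < d_neg, +1, -1)

# Generalisation check on fresh samples from the same distributions.
X_test = np.vstack([rng.normal(-mu, 1.0, size=(500, d)),
                    rng.normal(+mu, 1.0, size=(500, d))])
y_test = np.array([-1] * 500 + [+1] * 500)
acc = (predict(X_test) == y_test).mean()
print(f"test accuracy from {2 * n_shot} training points: {acc:.2f}")
```

The point of the sketch is the interplay the abstract describes: the geometry of the lifted feature space determines whether a classifier fitted to six points generalises at all.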

Cited by 3 publications (1 citation statement)
References 11 publications
“…The problem of learning from a small number of examples is strongly related to the recently discovered phenomenon of dimensionality blessing (Gorban et al., 2016) as opposed to the "curse of dimensionality" (Sutton et al., 2022). The connection between these two problems has a fundamental mathematical nature (Gorban and Tyukin, 2017).…”
mentioning
confidence: 99%
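The "dimensionality blessing" invoked in this citation statement rests on concentration of measure: independent random vectors in high-dimensional space are almost orthogonal to one another, which is what makes separating a point from a sample by a simple linear functional feasible. A minimal numerical illustration of that quasi-orthogonality, with arbitrary sample sizes and dimensions chosen only for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean absolute cosine between pairs of i.i.d. random unit vectors,
# measured as the dimension grows.
means = []
for d in (3, 30, 300, 3000):
    X = rng.normal(size=(200, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    G = X @ X.T                                # Gram matrix of cosines
    off_diag = G[~np.eye(200, dtype=bool)]     # drop the self-similarities
    means.append(np.abs(off_diag).mean())
    print(f"d = {d:4d}: mean |cos angle| = {means[-1]:.3f}")
```

The average cosine shrinks towards zero as the dimension increases, i.e. the vectors concentrate near mutual orthogonality, in line with the phenomenon the quoted passage attributes to Gorban et al. (2016) and Gorban and Tyukin (2017).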