2021
DOI: 10.20944/preprints202106.0718.v1
Preprint

High-Dimensional Separability for One- and Few-Shot Learning

Abstract: This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors. Systematic re-training of a large AI system is hardly possible. To solve this problem, special external devices, correctors, are developed. They should provide a quick, non-iterative fix of the system without modification of the legacy AI system. A common universal part of the AI corrector is a classifier that should separate undesired and erroneous behaviour from normal operation. Training of such classifiers is a grand ch…
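The corrector outlined in the abstract is, at its core, a classifier that separates one (or a few) error samples from normal operation in a high-dimensional feature space. A minimal sketch of that idea, assuming synthetic features and a simple one-shot linear discriminant; the data, the margin parameter, and all names here are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 1000  # feature dimension, number of "normal" legacy samples

# Hypothetical "normal operation" features: points sampled uniformly in the unit ball
normal = rng.standard_normal((n, d))
normal /= np.linalg.norm(normal, axis=1, keepdims=True)  # project to the sphere
normal *= rng.random((n, 1)) ** (1.0 / d)                # rescale: uniform in the ball

# A single observed error sample (one-shot setting)
error = rng.standard_normal(d)
error /= np.linalg.norm(error)

# One-shot corrector: a hyperplane through the error point's direction.
# Flag an input x as erroneous when <x, w> >= theta.
w = error
theta = 0.9 * np.dot(error, w)  # margin of 0.9 is an assumed illustrative choice

flags_error = bool(np.dot(error, w) >= theta)    # the known error is caught
false_positives = float(np.mean(normal @ w >= theta))  # normal points mis-flagged
```

In high dimension, concentration of measure makes the inner product of a random ball point with any fixed unit vector tightly concentrated near zero, so the single hyperplane separates the error sample from essentially all normal samples; this is the separability phenomenon the title refers to.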


Cited by 7 publications (4 citation statements) | References 58 publications
“…However, comprehensive theoretical justification of these schemes is yet to be seen. Recent work [7], [8] suggested a new framework offering a pathway for understanding few-shot learning. Instead of focusing on classical ideas rooted in empirical risk minimisation coupled with distribution-agnostic bounds, it explores the interplay between the geometry of feature spaces and concentration of measure phenomena [9].…”
Section: Introduction
confidence: 99%
“…In this work we adopt the theoretical framework proposed in [7], [8] and generalise it beyond the original setting, whereby the problem of few-shot learning is analysed in models' native feature spaces. Here we explore how the problem of few-shot learning changes if one allows a nonlinear transformation of these features.…”
Section: Introduction
confidence: 99%
“…Although recent work [3] provided new relevant insights explaining the coexistence of generalisation and overfitting, it does not address the challenge of learning from low volumes of data. Another relevant approach has been developed in [4], driven by the need to identify and correct errors made by modern high-dimensional AI systems. Rather than retraining the underlying system, which may be prohibitively expensive and risks catastrophically forgetting previous training, the focus is on building simple auxiliary systems that correct [4] or add functionality to existing AI systems.…”
Section: Introduction
confidence: 99%
“…Another relevant approach has been developed in [4], driven by the need to identify and correct errors made by modern high-dimensional AI systems. Rather than retraining the underlying system, which may be prohibitively expensive and risks catastrophically forgetting previous training, the focus is on building simple auxiliary systems that correct [4] or add functionality to existing AI systems. It has been proven, under certain assumptions, that this approach is effective: with high probability the pre-existing knowledge of the underlying system is retained and utilised when appropriate, while the new functionality is effectively learned.…”
Section: Introduction
confidence: 99%
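The citation above describes the corrector as an auxiliary system wrapped around an unmodified legacy model. A minimal sketch of that wiring, assuming a linear corrector hyperplane (w, theta) and a fallback label; `make_corrected`, the toy legacy model, and all parameters are hypothetical names for illustration, not the cited papers' API:

```python
import numpy as np

def make_corrected(legacy_predict, w, theta, fallback_label):
    """Wrap a legacy classifier with a linear corrector (illustrative sketch).

    Inputs whose features fall on the error side of the hyperplane
    <x, w> >= theta are routed to a fallback label; all other inputs
    keep the legacy prediction, so the legacy system is never modified.
    """
    def corrected(x):
        if np.dot(x, w) >= theta:
            return fallback_label
        return legacy_predict(x)
    return corrected

# Usage with a toy legacy model (predicts 1 when the feature sum is positive)
legacy = lambda x: int(x.sum() > 0)
w = np.full(4, 0.5)  # corrector normal vector, assumed learned elsewhere
fixed = make_corrected(legacy, w, theta=0.9, fallback_label=-1)

fixed(np.array([1.0, 1.0, 0.0, 0.0]))   # <x, w> = 1.0 >= 0.9: corrected to -1
fixed(np.array([-1.0, -1.0, 0.0, 0.0]))  # <x, w> = -1.0 < 0.9: legacy answer kept
```

This matches the property quoted above: on inputs the corrector does not flag, the wrapped system behaves exactly as before, so pre-existing knowledge is retained while the new (corrective) behaviour applies only inside the flagged half-space.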