2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021
DOI: 10.1109/iccv48922.2021.00069

Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition

Abstract: Many state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before using simple (e.g., nearest centroid) classifiers. We take an approach that is agnostic to the features used, and focus exclusively on meta-learning the final classifier layer. Specifically, we introduce MetaQDA, a Bayesian meta-learning generalisation of the classic quadratic discriminant analysis. This approach has several benefits of interest to practitioners: meta-learning is fast a…
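For context, the classical quadratic discriminant analysis that MetaQDA generalises fits one Gaussian (mean and covariance) per class and assigns queries by log-density. A minimal sketch over pre-extracted support features follows; this is the plain QDA baseline, not the paper's Bayesian meta-learned version, and the function names (`qda_fit`, `qda_predict`) and the ridge regulariser are illustrative assumptions:

```python
import numpy as np

def qda_fit(support_x, support_y, reg=1e-3):
    """Fit per-class Gaussians on support features.

    support_x: (n, d) feature array; support_y: (n,) integer labels.
    reg: ridge term added to each covariance for numerical stability,
    essential in the few-shot regime where samples per class << d.
    """
    params = {}
    for c in np.unique(support_y):
        xc = support_x[support_y == c]
        mu = xc.mean(axis=0)
        cov = np.cov(xc, rowvar=False) + reg * np.eye(support_x.shape[1])
        # store mean, precision matrix, and log-determinant of the covariance
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def qda_predict(params, query_x):
    """Assign each query to the class with the highest Gaussian log-density."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, prec, logdet = params[c]
        diff = query_x - mu
        # log N(x; mu, Sigma) up to a constant shared across classes
        maha = np.einsum('nd,de,ne->n', diff, prec, diff)
        scores.append(-0.5 * (maha + logdet))
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]
```

Note that with a single shot per class the empirical covariance is undefined, which is exactly the regime where the Bayesian prior over the QDA parameters described in the abstract becomes useful.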

Cited by 36 publications (25 citation statements)
References 30 publications
“…Few-shot learning methods [20,63,2,55,56,60,15,51] aim to adapt models to novel classes from a few samples of each class (assuming the classes used for training are disjoint from the novel classes seen at test time). Cross-domain few-shot learning [78,19,58] further addresses the setting in which the novel classes are sampled from a different domain with a different data distribution. In contrast, few-shot supervised domain adaptation aims to adapt models to new domains with the assistance of a few examples [44,57,65,45].…”
Section: Related Work
confidence: 99%
“…Much activity in the field falls under the umbrella of meta-learning [35], which aims to construct a data-efficient learner from the source (aka meta-train) dataset by simulating few-shot learning problems, and then deploy the customized learner on the target (aka meta-test) set. The resulting learner may take the form of an initialization [29], learned metric [59], Bayesian prior [72], or optimizer [54]. Simple-but-effective baselines: in competition with the plethora of sophisticated few-shot learners [35,69] such as those mentioned above, a number of recent studies have advocated strong baselines that perform comparably well while being simpler.…”
Section: Related Work
confidence: 99%
“…How do pre-training and architecture affect few-shot learning? Learning from a few shots can be achieved by a) meta-learning [66,72] and b) transfer learning from self-supervised foundation models pre-trained on large-scale external data [18,53]. While the majority of the FSL community focuses on the former, we show that the latter can be more effective because it enables the use of stronger architectures such as the vision transformer (ViT) [25], and can be combined with simple meta-learners such as ProtoNet.…”
Section: Introduction
confidence: 99%
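The ProtoNet-style classifier referenced in the quotes above, and in the abstract's mention of nearest-centroid classifiers, reduces to a mean-of-embeddings rule. A minimal sketch over pre-extracted features follows; the feature extractor (e.g., a frozen pretrained backbone) is assumed, and the function name is illustrative:

```python
import numpy as np

def nearest_centroid_predict(support_x, support_y, query_x):
    """ProtoNet-style classification: each class prototype is the mean
    of its support embeddings; each query is assigned to the nearest
    prototype under squared Euclidean distance."""
    classes = np.unique(support_y)
    prototypes = np.stack(
        [support_x[support_y == c].mean(axis=0) for c in classes]
    )
    # (num_queries, num_classes) matrix of squared distances
    d2 = ((query_x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(d2, axis=1)]
```

This is the "simple baseline" end of the spectrum the quoted related-work discussion contrasts with more sophisticated meta-learners.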