2022
DOI: 10.48550/arxiv.2203.06574
Preprint
Worst Case Matters for Few-Shot Recognition

Abstract: Few-shot recognition learns a recognition model with very few (e.g., 1 or 5) images per category, and current few-shot learning methods focus on improving the average accuracy over many episodes. We argue that in real-world applications we may often only try one episode instead of many, and hence maximizing the worst-case accuracy is more important than maximizing the average accuracy. We empirically show that a high average accuracy does not necessarily mean a high worst-case accuracy. Since this objective is not…
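The abstract's distinction between average and worst-case accuracy can be made concrete with a small sketch. The episode accuracies below are simulated stand-ins (the paper's actual model is not available here); the point is only that the two summary statistics are computed differently and can diverge.

```python
import random

def episode_accuracies(n_episodes, seed=0):
    # Hypothetical stand-in for evaluating a few-shot model on
    # n_episodes independently sampled episodes.
    rng = random.Random(seed)
    return [rng.uniform(0.4, 0.9) for _ in range(n_episodes)]

accs = episode_accuracies(10_000)

# The usual few-shot metric: mean accuracy over all episodes.
average = sum(accs) / len(accs)

# The metric the paper argues for: accuracy of the single worst episode,
# which matters when a deployed system gets only one episode.
worst = min(accs)

print(f"average accuracy:    {average:.3f}")
print(f"worst-case accuracy: {worst:.3f}")
```

Two models with the same average can have very different minima, which is why the paper argues the average alone is an incomplete summary.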

Cited by 1 publication (1 citation statement)
References 22 publications (45 reference statements)
“…In our context, class imbalance refers to a trained network with near 100% accuracy on a subset of classes and poor performance on other classes. We strongly advocate in classification tasks that practitioners evaluate and analyze test accuracies for every class, rather than only the average accuracy (Smith and Conovaloff, 2020 ; Fu et al, 2022 ). However, we are the first to apply data imbalance methods to unlabeled data.…”
Section: Introduction
confidence: 99%
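The citing statement's recommendation, evaluating test accuracy for every class rather than only the overall average, can be sketched as follows. The labels and helper below are illustrative, not from either paper.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    # Accuracy computed separately for each ground-truth class,
    # exposing classes the model handles poorly.
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return {c: correct[c] / total[c] for c in total}

# Toy predictions: the overall accuracy looks acceptable,
# but the "dog" class is mostly misclassified.
y_true = ["cat", "cat", "dog", "dog", "dog", "bird"]
y_pred = ["cat", "cat", "dog", "cat", "cat", "bird"]

per_class = per_class_accuracy(y_true, y_pred)
overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

Here `overall` is 4/6, yet `per_class["dog"]` is only 1/3, the kind of imbalance an average-only report would hide.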