2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019
DOI: 10.1109/cvpr.2019.00264

Large-Scale Long-Tailed Recognition in an Open World

Abstract: Real-world data often have a long-tailed and open-ended distribution. A practical recognition system must classify among majority and minority classes, generalize from a few known instances, and acknowledge novelty upon a never-seen instance. We define Open Long-Tailed Recognition (OLTR) as learning from such naturally distributed data and optimizing the classification accuracy over a balanced test set that includes head, tail, and open classes. OLTR must handle imbalanced classification, few-shot learning, and…

Cited by 941 publications (898 citation statements). References 51 publications.
“…In practice, we first train expert models using ordinary Instance-level Random Sampling, where each instance is sampled with equal probability. We then train the whole LFME using Class-level Random Sampling adopted in [30,24], where each class is sampled with equal number of samples and probability.…”
Section: Training (mentioning)
confidence: 99%
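The two sampling schemes contrasted in the quote can be sketched as follows; the function names are illustrative, not from the cited LFME paper. Instance-level sampling draws uniformly over instances (so head classes dominate), while class-level sampling first draws a class uniformly, then an instance within it:

```python
import random
from collections import defaultdict

def instance_level_sample(labels, rng=random):
    """Instance-level random sampling: every instance is equally likely,
    so large (head) classes dominate the draws."""
    return rng.randrange(len(labels))

def class_level_sample(labels, rng=random):
    """Class-level random sampling: pick a class uniformly first, then an
    instance within it, so every class is equally likely regardless of size."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    cls = rng.choice(list(by_class))
    return rng.choice(by_class[cls])
```

With a 90/10 split between two classes, class-level sampling returns each class about half the time, whereas instance-level sampling returns the minority class only about 10% of the time.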
“…Dataset We evaluate our proposed method on three benchmark long-tailed classification datasets: ImageNet-LT, Places-LT proposed in [30] and CIFAR100-LT proposed in [2]. ImageNet-LT is created by sampling a subset of ImageNet [6] following the Pareto distribution with power value α = 6.…”
Section: Experimental Settings (mentioning)
confidence: 99%
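A minimal sketch of how Pareto-distributed per-class sizes for an ImageNet-LT-style subset could be generated. The power value α = 6 comes from the quote; the 5/1280 size bounds follow the usual ImageNet-LT description, and the exact construction in the original benchmark may differ:

```python
import numpy as np

def pareto_class_sizes(num_classes, alpha=6.0, min_size=5, max_size=1280, seed=0):
    """Draw per-class image counts from a Pareto distribution with power
    value alpha, clipped to [min_size, max_size] and sorted head-first.
    Illustrative only; bounds and construction are assumptions."""
    rng = np.random.default_rng(seed)
    # rng.pareto returns Lomax samples; shifting by 1 and scaling yields
    # classical Pareto samples with minimum value min_size.
    raw = (rng.pareto(alpha, num_classes) + 1.0) * min_size
    sizes = np.clip(raw, min_size, max_size).astype(int)
    return np.sort(sizes)[::-1]  # largest (head) classes first
```

With α = 6 the distribution decays quickly, producing a few populous head classes and many tail classes near the minimum size.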
“…We use the method of memory feature to improve detection performance by enhancing the features of low-frequency relations. Inspired by how humans observe objects (when encountering an object, a human compares it with objects in memory and then identifies it), we used visual relation memory [15,19] to transfer information across different relations and enrich the features of low-frequency relations. The visual relation memory M stores the prototypes of each visual relation.…”
Section: Introduction (mentioning)
confidence: 99%
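The prototype memory M described in the quote can be sketched as a running mean of features per relation class, with retrieval blending a query feature with its class prototype. The interface below is an assumption for illustration, not the cited papers' implementation:

```python
import numpy as np

class RelationMemory:
    """Sketch of a visual relation memory: one prototype vector per
    relation class, maintained as a running mean, used to enrich the
    features of low-frequency relations."""

    def __init__(self, num_relations, dim):
        self.protos = np.zeros((num_relations, dim))
        self.counts = np.zeros(num_relations)

    def update(self, rel_id, feat):
        # Incremental running mean: prototype equals the average of all
        # feature vectors seen so far for this relation.
        self.counts[rel_id] += 1
        self.protos[rel_id] += (feat - self.protos[rel_id]) / self.counts[rel_id]

    def enrich(self, rel_id, feat, weight=0.5):
        # Blend a (possibly low-frequency) relation's feature with its
        # stored prototype to transfer information across relations.
        return (1 - weight) * feat + weight * self.protos[rel_id]
```

Tail relations with few examples thus borrow statistical strength from their accumulated prototype at retrieval time.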