2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01168
Equalization Loss for Long-Tailed Object Recognition

Abstract: Long-tail distribution is widely spread in real-world applications. Due to the extremely small ratio of instances, tail categories often show inferior accuracy. In this paper, we find that such a performance bottleneck is mainly caused by imbalanced gradients, which can be categorized into two parts: (1) a positive part, derived from samples of the same category, and (2) a negative part, contributed by other categories. Based on comprehensive experiments, it is also observed that the gradient ratio of accumulate…
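The abstract is cut off before the loss itself is spelled out, but the stated split of gradients into a positive part (same-category samples) and a negative part (other categories) suggests the following minimal sketch of a sigmoid-based equalization loss, which simply stops suppressing rare categories with negative gradients. The names `category_freq` and `freq_threshold` are illustrative assumptions, and the foreground/background indicator used in the detection setting is omitted.

```python
import torch
import torch.nn.functional as F

def equalization_loss(logits, targets, category_freq, freq_threshold=1e-3):
    """Hedged sketch of a sigmoid-based equalization loss.

    logits:         (N, C) raw class scores
    targets:        (N,) integer ground-truth labels
    category_freq:  (C,) relative frequency of each category in the training set
    freq_threshold: categories rarer than this are shielded from negative gradients
    """
    num_classes = logits.size(1)
    onehot = F.one_hot(targets, num_classes).float()        # (N, C)

    # Tail-category indicator: 1 where the class is rare, 0 otherwise.
    is_rare = (category_freq < freq_threshold).float()      # (C,)

    # Keep all positive terms (y_j = 1); drop negative terms for rare classes.
    weight = 1.0 - is_rare.unsqueeze(0) * (1.0 - onehot)    # (N, C)

    bce = F.binary_cross_entropy_with_logits(logits, onehot, reduction="none")
    return (weight * bce).sum() / logits.size(0)
```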

Cited by 399 publications (332 citation statements); references 69 publications.
“…In addition, the ground-truth label is one-hot encoded. However, models based on the softmax CE loss often suffer from inferior classification performance, especially for minority classes, due to the imbalanced data distribution [23]. Therefore, we further introduced an effective loss function, namely the CB focal loss, to supervise the training of CMI-Net and alleviate the class imbalance problem.…”
Section: Methods
confidence: 99%
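For reference, the CB focal loss referred to above pairs the "effective number" class-balanced weights of Cui et al. with the focal modulation term. A minimal sketch under that assumption, with `samples_per_class`, `beta`, and `gamma` as the usual hyperparameters, is:

```python
import torch
import torch.nn.functional as F

def cb_focal_loss(logits, targets, samples_per_class, beta=0.9999, gamma=2.0):
    """Sketch of a class-balanced (CB) focal loss for imbalanced classification.

    logits:            (N, C) raw scores
    targets:           (N,) integer labels
    samples_per_class: (C,) training-sample count per class
    """
    # Effective-number weights: (1 - beta) / (1 - beta^n_c), renormalized.
    effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
    class_weights = (1.0 - beta) / effective_num
    class_weights = class_weights / class_weights.sum() * logits.size(1)

    log_probs = F.log_softmax(logits, dim=1)
    pt = log_probs.exp().gather(1, targets.unsqueeze(1)).squeeze(1)   # p of true class
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)

    focal = (1.0 - pt).pow(gamma) * (-log_pt)                         # focal modulation
    return (class_weights[targets] * focal).mean()
```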
“…However, oversampling and undersampling come with high potential risks of overfitting and information loss, respectively [21]. Reweighting is more flexible and convenient: it directly assigns a weight to each training sample's loss to alleviate the model's sensitivity to the data distribution [23]. This method is further divided into class-level and sample-level reweighting.…”
Section: Introduction
confidence: 99%
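As a concrete example of the class-level variant, standard cross-entropy in PyTorch already accepts a per-class weight vector, so inverse-frequency reweighting reduces to a few lines; the class counts below are made up for illustration.

```python
import torch
import torch.nn.functional as F

# Hypothetical per-class training counts for a 4-class problem.
samples_per_class = torch.tensor([5000.0, 1200.0, 300.0, 40.0])

# Class-level reweighting: weight each class inversely to its frequency.
class_weights = samples_per_class.sum() / (len(samples_per_class) * samples_per_class)

logits = torch.randn(8, 4)            # (N, C) model outputs
targets = torch.randint(0, 4, (8,))   # (N,) ground-truth labels

# cross_entropy applies the weight of each sample's true class to its loss term.
loss = F.cross_entropy(logits, targets, weight=class_weights)
```

Sample-level reweighting instead assigns the weight per example, for instance as a function of the predicted probability, as focal-style losses do.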
“…In 2019, Cao et al. alternatively studied the minimum margin per class and designed a label-distribution-aware loss function that encourages a model to achieve the optimal trade-off between per-class margins [28]. Tan et al. proposed the equalization loss to tackle the problem of rare long-tailed categories by ignoring the gradients for rare categories [29]. In recent years, all of these methods have become popular reweighting methods.…”
Section: Related Work (Information Imbalance in Deep Learning)
confidence: 99%
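The two losses cited here translate into short expressions. A hedged sketch of the label-distribution-aware margin (LDAM) idea, where the per-class margin scales as n_c^{-1/4} and the constants `max_margin` and `scale` are arbitrary choices for the example, follows; a sketch of the equalization loss appears after the abstract above.

```python
import torch
import torch.nn.functional as F

def ldam_loss(logits, targets, samples_per_class, max_margin=0.5, scale=30.0):
    """Sketch of a label-distribution-aware margin (LDAM) loss.

    Rarer classes get larger margins (proportional to n_c^{-1/4}); each margin
    is subtracted from the true-class logit before the softmax cross-entropy.
    """
    margins = 1.0 / samples_per_class.float().pow(0.25)
    margins = margins * (max_margin / margins.max())     # largest margin = max_margin

    onehot = F.one_hot(targets, logits.size(1)).float()
    adjusted = logits - onehot * margins.unsqueeze(0)    # shrink only the true-class logit
    return F.cross_entropy(scale * adjusted, targets)
```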
“…In the final layer of a typical classification module, a representative operation, e.g., SoftMax, is usually used to compute a probability distribution over all classes. In other words, the probabilities across all classes compete against each other, and optimization may be biased depending on the category frequency [39]. In the case of HSCN, given the hierarchically annotated dataset in which there are hierarchical relationships among classes, we partition the global competition over all classes into local competitions between sibling classes.…”
Section: Clustering-guided Cropping Strategy
confidence: 99%
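To make the "local competition between sibling classes" concrete, one minimal interpretation is to normalize logits with a softmax within each group of classes that share a parent, instead of one softmax over all classes. The grouping below is a hypothetical toy hierarchy, not the HSCN one, and the parent-level probability term of a full hierarchical loss is omitted.

```python
import torch
import torch.nn.functional as F

def sibling_softmax_nll(logits, targets, sibling_groups):
    """Sketch: replace one global softmax with local softmaxes over sibling classes.

    logits:         (N, C) raw scores over leaf classes
    targets:        (N,) integer leaf-class labels
    sibling_groups: list of lists; each inner list holds class indices sharing a parent
    """
    log_probs = torch.zeros_like(logits)
    for group in sibling_groups:
        idx = torch.tensor(group)
        # Each class competes only with its siblings, not with every other class.
        log_probs[:, idx] = F.log_softmax(logits[:, idx], dim=1)
    # NLL of the target class under its local (conditional) distribution;
    # the parent-level term of a full hierarchical loss is left out for brevity.
    return F.nll_loss(log_probs, targets)

# Toy hierarchy: classes {0, 1, 2} share one parent, {3, 4} share another.
logits = torch.randn(4, 5)
targets = torch.tensor([0, 2, 3, 4])
loss = sibling_softmax_nll(logits, targets, [[0, 1, 2], [3, 4]])
```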