2021
DOI: 10.48550/arxiv.2104.03066
Preprint

Distributional Robustness Loss for Long-tail Learning

Abstract: Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes. To address unbalanced data, most studies try balancing the data, the loss, or the classifier to reduce classification bias towards head classes. Far less attention has been given to the latent representations learned with unbalanced data. We show that the feature extractor part of deep networks suffers greatly from this bias. We propose a new loss based on robustness theo…
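As a generic illustration of the "balancing the loss" family mentioned in the abstract (not the paper's proposed robustness-based loss), the following minimal PyTorch sketch weights cross-entropy by the effective-number-of-samples heuristic of Cui et al. (2019). The function name class_balanced_ce and the toy class_counts are illustrative assumptions, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def class_balanced_ce(logits, targets, class_counts, beta=0.999):
        # Per-class weight ~ (1 - beta) / (1 - beta^n_c), so rarer classes
        # receive larger weights and the bias towards head classes shrinks.
        counts = torch.as_tensor(class_counts, dtype=torch.float32,
                                 device=logits.device)
        effective_num = 1.0 - torch.pow(beta, counts)
        weights = (1.0 - beta) / effective_num
        weights = weights / weights.sum() * len(class_counts)  # mean weight ~ 1
        return F.cross_entropy(logits, targets, weight=weights)

    # Toy usage: 3 classes with a long-tailed count profile (1000 / 100 / 10).
    logits = torch.randn(8, 3)
    targets = torch.randint(0, 3, (8,))
    loss = class_balanced_ce(logits, targets, class_counts=[1000, 100, 10])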

Cited by 5 publications (12 citation statements)
References 35 publications
“…We further demonstrate our model's ability to integrate new, previously unseen classes with strong recognition accuracy, and provide an analysis of our model training strategy so as to facilitate knowledge transfer. Nonetheless, our approach is not tied to a specific backbone training approach, and further benefits could be obtained by combining our strategy with backbone training methods comprising auxiliary losses that optimise for class separability [7,26].…”
Section: Discussion
confidence: 99%
“…It has been shown that backbone capacity and additional training losses can substantially improve backbone quality and consequently classification performance [7,15,26]. As our approach is independent from the backbone training process, we train our model using the setting that matches most pre-existing approaches.…”
Section: Comparison to State of the Art Methods
confidence: 99%
“…After that, some decoupling methods [21,54] can also be regarded as transferring frozen head-class features to tail classes when fine-tuning classifiers. Recently, some studies [7,20,39,46,52] also transfer representations learned by contrastive learning or self-supervised learning to long-tailed problems.…”
Section: Long-tailed Visual Recognition
confidence: 99%
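The decoupling scheme referenced in the statement above (a representation frozen after stage-one training, followed by classifier re-fitting on class-balanced batches) can be sketched as below. This is a hedged, generic reconstruction in PyTorch, not code from the cited works; backbone, balanced_loader (a class-balanced sampler), feat_dim, and num_classes are assumed to be defined by the surrounding training script.

    import torch
    import torch.nn as nn

    def retrain_classifier(backbone, feat_dim, num_classes, balanced_loader,
                           epochs=10):
        # Stage 2 of a decoupled pipeline: freeze the feature extractor learned
        # on the long-tailed data and re-fit only the linear classifier.
        for p in backbone.parameters():
            p.requires_grad = False
        backbone.eval()

        classifier = nn.Linear(feat_dim, num_classes)
        optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
        criterion = nn.CrossEntropyLoss()

        for _ in range(epochs):
            for images, labels in balanced_loader:  # class-balanced mini-batches
                with torch.no_grad():
                    feats = backbone(images)        # frozen features
                loss = criterion(classifier(feats), labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return classifier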