2020
DOI: 10.1109/access.2020.2991231
Adjusting Decision Boundary for Class Imbalanced Learning

Abstract: Training of deep neural networks heavily depends on the data distribution. In particular, the networks easily suffer from class imbalance. The trained networks would recognize the frequent classes better than the infrequent classes. To resolve this problem, existing approaches typically propose novel loss functions to obtain better feature embedding. In this paper, we argue that drawing a better decision boundary is as important as learning better features. Inspired by observations, we investigate how the clas…

Cited by 53 publications (43 citation statements)
References 43 publications
“…It is usually the case that real-world data exhibit an imbalanced distribution, and highly skewed data can adversely affect the effectiveness of machine learning [10,3]. Re-sampling [11] and re-weighting [12,10,13] are traditional methods for addressing imbalanced data. Through re-weighting strategies, modern works can make networks more sensitive to minority categories by assigning a variable weight to each class.…”
Section: Learning From Imbalanced Data
confidence: 99%
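
To make the re-weighting idea in the statement above concrete, the sketch below assigns each class a weight inversely proportional to its training frequency and passes those weights to a standard cross-entropy loss. This is a minimal illustration of the general strategy, not the specific scheme of any cited work; the weighting rule, function names, and toy data are assumptions.

```python
# Minimal sketch of per-class re-weighting: classes are weighted inversely to
# their frequency so that minority classes contribute more to the loss.
# The weighting rule and all names here are illustrative assumptions.
import torch
import torch.nn.functional as F

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Per-class weights proportional to 1 / frequency, rescaled to mean 1."""
    counts = torch.bincount(labels, minlength=num_classes).float().clamp(min=1)
    weights = 1.0 / counts
    return weights * num_classes / weights.sum()

# Toy usage: a 3-class problem with a heavily skewed label distribution.
labels = torch.tensor([0] * 90 + [1] * 9 + [2] * 1)
weights = inverse_frequency_weights(labels, num_classes=3)

logits = torch.randn(len(labels), 3)                    # stand-in for network outputs
loss = F.cross_entropy(logits, labels, weight=weights)  # re-weighted cross-entropy
```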
“…Nevertheless, data resampling or loss function engineering will influence the representations of data (Ren et al 2020). Many empirical observations show that we can acquire good representations with standard training and that the classifier head is the performance bottleneck (Kang et al 2019;Zhang et al 2021;Yu et al 2020;Kim and Kim 2020). To solve the above problem, decision boundary adjustment methods re-adjust the classifier head after the standard training (Kang et al 2019;Zhang et al 2021).…”
Section: Related Work
confidence: 99%
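
One widely cited instance of such post-hoc adjustment is τ-normalization of the classifier weights (Kang et al. 2019), which rescales each class vector of the final linear layer by its norm after standard training so that frequent classes no longer dominate the decision boundary. The sketch below illustrates that idea; the attribute name `model.fc`, the value of τ, and the bias handling are illustrative assumptions rather than a faithful reproduction of any one method.

```python
# Sketch of a post-hoc classifier adjustment in the spirit of tau-normalization
# (Kang et al. 2019): after standard training, each class vector of the final
# linear layer is divided by its norm raised to tau, shrinking the head classes.
# `model.fc`, the tau value, and zeroing the bias are assumptions for illustration.
import torch

@torch.no_grad()
def tau_normalize_classifier(fc: torch.nn.Linear, tau: float = 1.0) -> None:
    norms = fc.weight.norm(p=2, dim=1, keepdim=True)  # per-class weight norms, shape (C, 1)
    fc.weight.div_(norms.pow(tau) + 1e-12)            # larger-norm (frequent) classes shrink most
    if fc.bias is not None:
        fc.bias.zero_()                               # the bias is commonly dropped as well

# Usage: adjust only the classification head once representation learning is done.
# model = ...                                  # trained network with final layer `model.fc`
# tau_normalize_classifier(model.fc, tau=1.0)
```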
“…in [36], the authors propose an adaptive loss function that assigns weights to samples (both labeled and unlabeled) based on their importance. For the same setting, the authors of [19] analyze the entropy of the softmax function output in order to draw better decision boundaries between different classes. An approach for dealing with noisy labels was proposed by [6], who used two stacked softmax layers with the latter denoted as "noise separation layer" and used to model the noise of the original classification.…”
Section: Semi-supervised Learning Using Neural Network
confidence: 99%
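
The quantity referenced in the statement above, the entropy of the softmax output, can be computed per sample as shown below; high entropy marks predictions that sit close to the decision boundary. How the entropy is then used (for example, as a criterion for placing the boundary with respect to unlabeled data) is specific to [19]; this sketch only shows the computation, with shapes and names chosen for illustration.

```python
# Sketch of softmax-output entropy: one value per sample, where higher entropy
# means a more ambiguous prediction, i.e. a sample near the decision boundary.
# Shapes and names are chosen purely for illustration.
import torch
import torch.nn.functional as F

def softmax_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Shannon entropy of the predictive distribution for each sample."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1)

logits = torch.randn(8, 10)        # stand-in for outputs on 8 unlabeled samples, 10 classes
entropy = softmax_entropy(logits)  # higher values -> more ambiguous, near-boundary samples
```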