2019
DOI: 10.1016/j.patrec.2019.08.009
Reverse-nearest neighborhood based oversampling for imbalanced, multi-label datasets

Cited by 34 publications (9 citation statements)
References 29 publications
“…Sáez et al. (2016) applied this method in analyzing class characteristics, whereby subsets of certain instances were identified in each class and increased individually. A novel reverse-nearest neighborhood-based over-sampling method for the class imbalance of multi-label datasets was introduced by Sadhukhan and Palit (2019). The reverse nearest neighborhood of a query point consists of all points that include the query point as one of their nearest neighbors.…”
Section: Related Work
confidence: 99%
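The reverse nearest neighborhood described in the quoted passage can be computed directly from an ordinary k-nearest-neighbour index. Below is a minimal sketch, assuming a NumPy feature matrix and scikit-learn's NearestNeighbors; the function name and the choice of k are illustrative, not part of the cited method's specification.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def reverse_nearest_neighborhood(X, query_idx, k=5):
    """Indices of points whose k nearest neighbours include the query point."""
    # Fit a k-NN index; request k+1 neighbours because each point returns itself first.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    neighbors = nn.kneighbors(X, return_distance=False)[:, 1:]  # drop the self-neighbour column
    # Point i is in the reverse neighbourhood of the query iff query_idx appears among i's k neighbours.
    return np.where((neighbors == query_idx).any(axis=1))[0]
```

Unlike the ordinary k-nearest neighborhood, the reverse neighborhood can be empty (for points in sparse regions) or arbitrarily large, which is one reason it is used to characterize how isolated minority instances are.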
“…One way of achieving this is through the removal of points from the majority class of each label, for example using random undersampling [30] or Tomek-link based undersampling [22]. Another way is to add synthetic minority points to the minority class [17,28,4]. Although these approaches have been shown to be effective, there is still a lot of room for improvement.…”
Section: Related Work
confidence: 99%
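To make the first of the two resampling directions mentioned above concrete, here is a minimal sketch of per-label random undersampling, assuming a binary label matrix; the function name, the ratio parameter, and the use of NumPy are illustrative rather than taken from the cited works.

```python
import numpy as np

def random_undersample_label(X, Y, label_idx, ratio=1.0, seed=None):
    """Randomly drop majority-class points for one label of a multi-label dataset.

    X, Y      : feature matrix and binary label matrix (illustrative inputs).
    label_idx : which label column to balance.
    ratio     : desired majority/minority size ratio after undersampling.
    """
    rng = np.random.default_rng(seed)
    pos = np.where(Y[:, label_idx] == 1)[0]   # minority class for this label (assumed)
    neg = np.where(Y[:, label_idx] == 0)[0]   # majority class for this label
    n_keep = min(len(neg), int(ratio * len(pos)))
    kept_neg = rng.choice(neg, size=n_keep, replace=False)
    keep = np.sort(np.concatenate([pos, kept_neg]))
    return X[keep], Y[keep]
```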
“…In [36], the authors have integrated the data gravitational model with a multi-label lazy learner for improving the minority class performance of imbalanced multi-label datasets. In our very recent work [37], we have used reverse-nearest neighborhood oversampling to curb the problem of differential imbalance in multi-label datasets. Several other techniques, such as convex relaxation [38], ensembles of random graphs [39], and graph classification [40], are also employed to address multi-label classification.…”
Section: Related Work
confidence: 99%
“…COCOA specifically addresses the class-imbalance problem in multi-label datasets. Additionally, we have also included Reverse-nearest neighborhood based oversampling for multi-label datasets (RNNOML) [37] in this empirical study. MLKNN invokes a set of k-nearest neighbor based classifiers for multi-label datasets.…”
Section: Ranking Loss
confidence: 99%
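For context on the MLKNN baseline mentioned above, the following is a compact sketch of the standard ML-kNN decision rule (a MAP estimate over k-nearest-neighbour label counts, as formulated by Zhang and Zhou); it is a simplified illustration with assumed NumPy/scikit-learn inputs, not the implementation used in the cited study, and the parameter names are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mlknn_fit_predict(X_train, Y_train, X_test, k=10, s=1.0):
    """Sketch of ML-kNN: MAP decision over k-nearest-neighbour label counts (smoothing s)."""
    n, q = Y_train.shape
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_train)

    # Prior P(label j present), with Laplace smoothing.
    prior = (s + Y_train.sum(axis=0)) / (2 * s + n)

    # For every training point, count how many of its k neighbours carry each label.
    idx_train = nn.kneighbors(X_train, return_distance=False)[:, 1:]      # drop self
    c_train = np.array([Y_train[idx].sum(axis=0) for idx in idx_train])   # (n, q)

    # Likelihoods P(c labelled neighbours | label present / absent), for c = 0..k.
    kj1 = np.zeros((q, k + 1)); kj0 = np.zeros((q, k + 1))
    for i in range(n):
        for j in range(q):
            (kj1 if Y_train[i, j] else kj0)[j, int(c_train[i, j])] += 1
    post1 = (s + kj1) / (s * (k + 1) + kj1.sum(axis=1, keepdims=True))
    post0 = (s + kj0) / (s * (k + 1) + kj0.sum(axis=1, keepdims=True))

    # Predict each label of each test point by comparing the two posteriors.
    idx_test = nn.kneighbors(X_test, n_neighbors=k, return_distance=False)
    c_test = np.array([Y_train[idx].sum(axis=0) for idx in idx_test])     # (m, q)
    Y_pred = np.zeros((len(X_test), q), dtype=int)
    for j in range(q):
        cj = c_test[:, j].astype(int)
        Y_pred[:, j] = (prior[j] * post1[j, cj] >= (1 - prior[j]) * post0[j, cj]).astype(int)
    return Y_pred
```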