2021
DOI: 10.1109/access.2020.3046604

Multiple Kernel Learning With Minority Oversampling for Classifying Imbalanced Data

Cited by 8 publications (4 citation statements)
References 33 publications
“…Other ways of oversampling include, but are not limited to, the work of [91,92,93,94,78,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119]. The validation process is what all oversampling methods have in common: it is essentially the evaluation of the performance of the classifier employed to classify the oversampled datasets, using one or more accuracy measures such as Accuracy, Precision, Recall, F-measure, G-mean, Specificity, Kappa, Matthews correlation coefficient (MCC), Area under the ROC Curve (AUC), True positive rate, False negative (FN), False positive (FP), True positive (TP), True negative (TN), and the ROC curve. Table 1 lists 72 oversampling methods, including their known names, references, the number of datasets utilized, the number of classes in these datasets, the classifiers employed, and the performance metrics used to validate the classification results after oversampling.…”
Section: Literature Review of Oversampling Methods (mentioning)
confidence: 99%
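The quoted passage lists the validation measures commonly reported in oversampling studies. As a minimal, illustrative sketch (assumed, not taken from the cited paper), the Python snippet below computes several of these measures for a binary imbalanced problem with scikit-learn; the names imbalance_report, y_true, y_pred, and y_score are placeholders introduced here.

```python
# Minimal sketch: common imbalanced-classification metrics with scikit-learn.
# Convention assumed here: 1 = minority (positive) class, 0 = majority class.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, matthews_corrcoef, roc_auc_score)

def imbalance_report(y_true, y_pred, y_score):
    # TN, FP, FN, TP from the 2x2 confusion matrix
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    recall = recall_score(y_true, y_pred)        # true positive rate / sensitivity
    specificity = tn / (tn + fp)                 # true negative rate
    return {
        "Accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "Precision":   precision_score(y_true, y_pred),
        "Recall":      recall,
        "Specificity": specificity,
        "F-measure":   f1_score(y_true, y_pred),
        "G-mean":      np.sqrt(recall * specificity),
        "MCC":         matthews_corrcoef(y_true, y_pred),
        "AUC":         roc_auc_score(y_true, y_score),  # needs scores, not hard labels
    }

# Toy usage with made-up predictions
y_true  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 0])
y_pred  = np.array([0, 0, 0, 1, 0, 0, 1, 1, 0, 0])
y_score = np.array([0.1, 0.2, 0.3, 0.6, 0.2, 0.1, 0.9, 0.8, 0.4, 0.3])
print(imbalance_report(y_true, y_pred, y_score))
```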
“…When there are irregularities in the imbalanced data (such as small disjuncts, overlapping, and noise [30]) and the data scale is large, applying a single kernel may make the model biased, skewed, or misleading. Inspired by the MKL algorithm [31], we construct a low-rank approximate multiple kernel framework as follows:…”
Section: Proposed Algorithms (mentioning)
confidence: 99%
“…We divide the imbalanced dataset $D=\{(x_i, y_i)\}_{i=1}^{n}$ into the minority class set $D^{+}=\{(x_i, +1)\}_{i=1}^{n_{+}}$ and the majority class set $D^{-}=\{(x_i, -1)\}_{i=1}^{n_{-}}$. When there are irregularities in the imbalanced data (such as small disjuncts, overlapping, and noise [30]) and the data scale is large, applying a single kernel may make the model biased, skewed, or misleading. Inspired by the MKL algorithm [31], we construct a low-rank approximate multiple kernel framework as follows: $K = \sum_{m=1}^{M} d_m \widetilde{K}_m$, where $\widetilde{K}_m$ corresponds to the rank-$k$ approximation of each base kernel matrix $K_m$, and $d_m$ is the corresponding mixture weight.…”
Section: Proposed Algorithms (mentioning)
confidence: 99%
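The quoted framework mixes rank-k approximations of several base kernel matrices with weights d_m. The snippet below is a hedged sketch of one way such a combination could be formed, assuming RBF base kernels and a truncated eigendecomposition for the rank-k step; the function names (rank_k_approx, multiple_kernel), the kernel widths, and the normalization of the mixture weights are assumptions for illustration, not the authors' implementation.

```python
# Sketch (assumed, not the paper's code): combine rank-k approximations of
# several base kernel matrices K_m using mixture weights d_m.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def rank_k_approx(K, k):
    """Rank-k approximation of a symmetric PSD kernel matrix via truncated eigendecomposition."""
    vals, vecs = np.linalg.eigh(K)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]        # keep the k largest eigenvalues
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

def multiple_kernel(X, gammas, d, k):
    """K = sum_m d_m * rank_k(K_m), where K_m is an RBF kernel with width gamma_m."""
    d = np.asarray(d, dtype=float)
    d = d / d.sum()                         # normalize mixture weights (an assumption)
    K = np.zeros((X.shape[0], X.shape[0]))
    for gamma, d_m in zip(gammas, d):
        K += d_m * rank_k_approx(rbf_kernel(X, gamma=gamma), k)
    return K

# Example: three base kernels of different widths, rank-20 approximation
X = np.random.default_rng(0).normal(size=(200, 5))
K = multiple_kernel(X, gammas=[0.1, 1.0, 10.0], d=[1, 1, 1], k=20)
print(K.shape)  # (200, 200)
```

The rank-k step caps the cost of working with each base kernel when the data scale is large, which is the motivation the quoted passage gives for the low-rank approximate framework.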