2012
DOI: 10.1007/s10618-012-0295-5
Training and assessing classification rules with imbalanced data

Cited by 568 publications (300 citation statements) | References 48 publications
“…To design a robust predictive model, a balanced dataset is used to avoid possible bias caused by a majority class. To show the effectiveness of using the balanced dataset, a comparative study was performed by measuring false negatives (FN) and false positives (FP) [50]. We measured Type I and Type II errors (i.e., false positives and false negatives, respectively) when using the balanced and imbalanced datasets.…”
Section: Results (mentioning)
confidence: 99%
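The Type I/II error comparison described in the excerpt above can be sketched as a simple confusion-matrix count. This is a minimal illustration, not the cited authors' code; the function name and toy data are assumptions.

```python
import numpy as np

def type_i_ii_errors(y_true, y_pred, positive=1):
    """Count Type I (false positive) and Type II (false negative) errors."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fp = int(np.sum((y_pred == positive) & (y_true != positive)))  # Type I
    fn = int(np.sum((y_pred != positive) & (y_true == positive)))  # Type II
    return fp, fn

# Toy example: 3 positives, 5 negatives in the ground truth
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0, 0, 0]
print(type_i_ii_errors(y_true, y_pred))  # (1, 2): one FP, two FN
```

Comparing these two counts for a model trained on the balanced dataset against one trained on the imbalanced dataset is exactly the comparison the excerpt describes.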
See 1 more Smart Citation
“…To design a robust predictive model, a balanced dataset is used to avoid possible bias caused by a majority class. To show the effectiveness of utilizing the balanced dataset, a comparative study was performed by measuring false negatives (FN) and false positives (FP) [50]. We measured Type I and II errors (i.e false positive and false negative, respectively) when using the balanced and imbalanced datasets.…”
Section: Resultsmentioning
confidence: 99%
“…The predictive model with the DWT features outperformed the others in accuracy, sensitivity, specificity, and AUC. Among the four sliding-window sizes (25, 50, 100, and 150 data points), the 150-data-point window performed best.…”
Section: Classification Performance Comparison (mentioning)
confidence: 94%
“…To avoid this problem, we apply the sampling approach proposed in ROSE [51] to the training dataset, which down-samples the majority class and synthesizes new examples in the minority class.…”
Section: Methods (mentioning)
confidence: 99%
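The smoothed bootstrap resampling that ROSE is based on can be sketched as follows: draw an equal-sized bootstrap sample from each class and perturb each draw with Gaussian kernel noise, so minority examples are synthesized rather than duplicated. This is a hedged, minimal sketch, not the ROSE package itself; the function name, the single bandwidth `h`, and the toy data are assumptions.

```python
import numpy as np

def smoothed_bootstrap_balance(X, y, h=0.1, rng=None):
    """ROSE-style sketch: for each class, bootstrap-resample to a common
    size and add Gaussian noise with bandwidth h (a smoothed bootstrap),
    yielding a balanced dataset with synthetic, non-duplicated points."""
    rng = rng or np.random.default_rng(0)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    n_per_class = len(y) // len(classes)      # target size per class
    Xs, ys = [], []
    for c in classes:
        Xc = X[y == c]
        idx = rng.integers(0, len(Xc), size=n_per_class)   # bootstrap draw
        noise = rng.normal(0.0, h, size=(n_per_class, X.shape[1]))
        Xs.append(Xc[idx] + noise)            # kernel-smoothed resample
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

# Imbalanced toy data: 90 majority vs 10 minority points in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(3, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
Xb, yb = smoothed_bootstrap_balance(X, y)
print(np.bincount(yb))  # [50 50] — both classes equally represented
```

The real ROSE method chooses the kernel bandwidth per feature from the class-conditional data rather than using a fixed `h`; the fixed bandwidth here is purely for illustration.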
“…They also applied SVM-based classifiers; when the imbalance is extreme, novelty detectors are more accurate than balanced and unbalanced binary classifiers. Menardi et al. [12] discuss the effects of class imbalance on both model training and model assessment. They propose a unified and systematic framework for dealing with both problems, based on a smoothed bootstrap resampling technique.…”
Section: Current Approaches In Decision Trees (mentioning)
confidence: 99%