2016 IEEE 16th International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2016.0143
Optimizing the Multiclass F-Measure via Biconcave Programming

Cited by 26 publications (16 citation statements). References 6 publications.
“…In the classification problem, class imbalance is common because most datasets do not contain an exactly equal number of instances in each class, and this phenomenon also applies to human activities in daily life. The F1-score is a commonly used measure in class-imbalanced settings [41]. In the multi-class classification domain, however, the micro-averaged F1-score is equivalent to accuracy, which measures the ratio of correctly predicted observations to the total number of instances.…”
Section: Discussion
confidence: 99%
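The equivalence noted in this statement (micro-averaged F1 equals accuracy in single-label multiclass settings) holds because every misclassified instance contributes exactly one false positive, for the predicted class, and one false negative, for the true class. A minimal sketch checking this numerically with scikit-learn; the labels are hypothetical, chosen only for illustration:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical single-label, 3-class predictions.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 0, 2, 2])

# Each error adds one FP (predicted class) and one FN (true class),
# so micro precision = micro recall = accuracy, hence micro F1 = accuracy.
print(accuracy_score(y_true, y_pred))             # 0.7
print(f1_score(y_true, y_pred, average="micro"))  # 0.7
```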
“…Sensitivity, specificity and predictive accuracy of the decision algorithms (A) and (B) were calculated. As decision tree C is a multiclass predictor, sensitivity and specificity were not directly defined for it, and the corresponding mean sensitivity, mean specificity, and mean accuracy were calculated as previously described [19]. These quality measures were obtained as follows: For every multiclass predictor, which differentiates between n classes, n binary sub-predictors were defined.…”
Section: Discussion
confidence: 99%
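The one-vs-rest decomposition described here (n binary sub-predictors for an n-class predictor) can be read directly off the multiclass confusion matrix. A minimal sketch of mean sensitivity and mean specificity computed that way, using scikit-learn and the same hypothetical labels:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 0, 2, 2])

cm = confusion_matrix(y_true, y_pred)  # rows = true class, cols = predicted

sens, spec = [], []
for k in range(cm.shape[0]):           # one binary sub-predictor per class
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp           # class-k instances predicted as other
    fp = cm[:, k].sum() - tp           # other instances predicted as class k
    tn = cm.sum() - tp - fn - fp
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))

print("mean sensitivity:", np.mean(sens))
print("mean specificity:", np.mean(spec))
```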
“…For simplicity in evaluation, we calculate metrics for each class and report only the macro-averaged values over all classes. We prefer the macro average in unbalanced settings because it calculates the unweighted mean and places equal emphasis on all classes [45]. When only one metric is considered for evaluation, we use the F1 score by default.…”
Section: Evaluation Metrics
confidence: 99%
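Macro averaging, as used in this statement, simply averages the per-class scores without class-frequency weights, which is why it emphasizes minority classes. A minimal sketch of the computation via scikit-learn's average="macro" option, again with hypothetical labels:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2, 0, 2, 2]

# average="macro" computes each metric per class, then takes the
# unweighted mean, so rare classes weigh as much as common ones.
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))
```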