2013
DOI: 10.1016/j.patcog.2013.01.006

A new framework for optimal classifier design

Abstract: The use of alternative measures to evaluate classifier performance is gaining attention, especially for imbalanced problems. However, the use of these measures in the classifier design process remains unsolved. In this work we propose a classifier designed specifically to optimize one of these alternative measures, namely the so-called F-measure. Nevertheless, the technique is general, and it can be used to optimize other evaluation measures. An algorithm to train the novel classifier is proposed, and the num…
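For context, the measure the abstract targets is the standard F_beta. Below is a minimal sketch of its computation from confusion-matrix counts; the helper `f_beta` is illustrative only, not the paper's training algorithm.

```python
# Minimal sketch (not the paper's algorithm): computing the F-measure
# that the abstract proposes to optimize directly.
def f_beta(tp, fp, fn, beta=1.0):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with
    P = tp / (tp + fp) and R = tp / (tp + fn)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = beta ** 2 * precision + recall
    return (1 + beta ** 2) * precision * recall / denom if denom else 0.0

# Example: 80 true positives, 20 false positives, 40 false negatives.
print(f_beta(80, 20, 40))  # P = 0.8, R = 2/3, so F1 ≈ 0.727
```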

Cited by 21 publications (8 citation statements). References 20 publications.
“…variants of the SVM learning algorithm (based on the maximum-margin approach) [18,11,32,24,25], whose objective function is (except for [18]) a convex approximation of an F measure; optimization algorithms whose objective function is a non-convex approximation [9,17,20]; algorithms that tune the decision thresholds of binary classifiers [6,26,27,22,13,12]; and cost-sensitive algorithms [23,13].…”
Section: Empirical Utility Maximization Approach
confidence: 99%
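As an illustration of the threshold-tuning family this statement cites, here is a hedged sketch: sweep the decision threshold over held-out scores and keep the one with the best F1. This is a generic recipe, not the procedure of any particular reference.

```python
import numpy as np

def best_f1_threshold(scores, labels):
    """Return (threshold, F1) maximizing F1 when predicting positive
    for scores >= threshold. scores: 1-D floats; labels: 0/1 ints."""
    order = np.argsort(-scores)          # sort candidates by descending score
    sorted_labels = labels[order]
    tp = np.cumsum(sorted_labels)        # positives captured at each cut
    fp = np.cumsum(1 - sorted_labels)    # negatives captured at each cut
    fn = sorted_labels.sum() - tp        # positives missed below the cut
    f1 = 2 * tp / (2 * tp + fp + fn)     # identity: F1 = 2TP / (2TP + FP + FN)
    k = int(np.argmax(f1))
    return float(scores[order][k]), float(f1[k])

scores = np.array([0.9, 0.7, 0.6, 0.4, 0.2])
labels = np.array([1, 0, 1, 1, 0])
print(best_f1_threshold(scores, labels))  # (0.4, ~0.857)
```

Sorting once makes the sweep O(n log n): the TP/FP/FN counts at every candidate cut fall out of cumulative sums.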
“…In [9] and [17] learning algorithms that maximize continuous but non-convex approximations of F_b were proposed, using numerical optimization techniques.…”
Section: Single-label F Measure
confidence: 99%
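The flavor of such non-convex approximations can be sketched by replacing hard confusion-matrix counts with expected ("soft") counts under the model's output probabilities, which makes F_b differentiable in the model parameters. This is a generic illustration, not the exact objective of [9] or [17].

```python
import numpy as np

def soft_f_beta(p, y, beta=1.0):
    """Smooth F_beta surrogate. p: predicted probabilities in (0, 1);
    y: 0/1 labels. Maximize this w.r.t. the model producing p."""
    tp = np.sum(p * y)            # expected true positives
    fp = np.sum(p * (1 - y))      # expected false positives
    fn = np.sum((1 - p) * y)      # expected false negatives
    b2 = beta ** 2
    # F_beta = (1 + b^2) TP / ((1 + b^2) TP + b^2 FN + FP)
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)

p = np.array([0.9, 0.2, 0.8, 0.6])
y = np.array([1, 0, 1, 0])
print(soft_f_beta(p, y))  # ≈ 0.756; smooth, but non-convex in the model
```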
“…As proposed in [14], we attempt to maximize the F-measure, and also monitor the values of Precision and Recall. We perform the analysis for the default value of beta equal to one, which translates into equal weighting of Precision and Recall.…”
Section: Performance Measure
confidence: 99%
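For reference, the standard definition behind that statement; with beta = 1 the measure reduces to the harmonic mean of Precision (P) and Recall (R), so the two are weighted equally:

```latex
% Standard F_beta; for beta = 1 it is the harmonic mean of P and R.
F_\beta = \frac{(1+\beta^{2})\, P \cdot R}{\beta^{2} P + R},
\qquad
F_1 = \frac{2\, P R}{P + R}
```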
“…Once the candidate detection has been performed, the number of true polyps was much lower than the number of non-polyp patches, a ratio on the order of 500:1, which is a significant problem for the learning stage of the classifier, since most classifiers are designed to maximize accuracy, which is not adequate for imbalanced problems [7]. For instance, if we classified all candidates as "non-polyps," we would obtain an accuracy of 99.8% without detecting any polyps.…”
Section: Classification
confidence: 99%
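A quick arithmetic check of that example, assuming counts of 500 negatives to 1 positive to match the stated ratio:

```python
# Worked check of the imbalance example above: with a ~500:1
# non-polyp:polyp ratio, labeling everything "non-polyp" yields
# high accuracy but detects nothing.
negatives, positives = 500, 1        # assumed counts matching 500:1
tp, fp = 0, 0                        # trivial all-negative classifier
tn, fn = negatives, positives
accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)
print(f"accuracy = {accuracy:.3f}, recall = {recall:.1f}")
# accuracy = 0.998, recall = 0.0 -> accuracy is misleading here
```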