The 2013 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2013.6707105

Optimizing F-measure with non-convex loss and sparse linear classifiers

Cited by 9 publications (11 citation statements; 0 supporting, 11 mentioning, 0 contrasting)
References 11 publications
“…Optimizing the F-measure is another popular method for imbalanced learning. Joachims [15], Chinta et al. [20], Maratea et al. [21], and Lipton et al. [22] used different approximations to the F-measure and designed different classifiers. Numerical experiments on benchmark datasets demonstrated their algorithms' effectiveness.…”
Section: Relevant Background (mentioning)
confidence: 99%
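For reference (an editorial addition, not part of the quoted citation statements): the F-measure being approximated in these works is the standard harmonic combination of precision P and recall R, computed from the confusion-matrix counts tp, fp, fn:

F_\beta = \frac{(1+\beta^2)\, P\, R}{\beta^2\, P + R},
\qquad P = \frac{tp}{tp + fp},
\qquad R = \frac{tp}{tp + fn}

The common choice is F_1 (beta = 1). Because tp, fp, and fn are counts over the whole sample, F_\beta does not decompose into a sum of per-example losses, and it is non-convex in the classifier parameters; both properties are what the approximation schemes above must work around.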
“…The second group, algorithm-oriented methods, aims to extend and modify existing classification algorithms so that they deal more effectively with imbalanced data. For example, Liu et al. and Kang and Ramamohanarao have presented two modified decision tree algorithms that improve on standard C4.5, namely CCPDT [11] and HeDEx [12], while Köknar-Tezel et al., Joachims et al., and Lipton et al. have proposed various approaches to improve traditional SVMs' performance in imbalanced settings [13][14][15][16][17][18][19][20][21][22].…”
Section: Introduction (mentioning)
confidence: 99%
“…Therefore, various approximation algorithms have been proposed, which mainly fall into two paradigms [8]. The Empirical Utility Maximization (EUM) approach learns a classifier with optimal performance on the training data [9][10][11][12][13][14][15][16], while the decision-theoretic (DT) approach learns a probabilistic model and then predicts the labels with maximum expected F-measure [17][18][19][20]. Since our aim in this paper is to design an efficient classifier for maximizing the F-measure, and the DT approach may incur high computational cost in the prediction step [8], in the following we focus on the Empirical Utility Maximization approach.…”
Section: Introduction (mentioning)
confidence: 99%
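As a concrete illustration of the EUM idea (an editorial sketch, not code from any of the cited papers): the simplest empirical-utility maximizer fixes a scoring function and then picks the decision threshold that maximizes F1 on the training sample. A minimal Python sketch, with the synthetic data and all names hypothetical:

import numpy as np

def f1_score(y_true, y_pred):
    # F1 from raw counts; returns 0 when there are no predicted or actual positives.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def eum_threshold(scores, y_true):
    # Empirical Utility Maximization over one parameter: evaluate F1 at every
    # distinct score and keep the cut with the highest empirical utility.
    best_t, best_f1 = 0.0, -1.0
    for t in np.unique(scores):
        f1 = f1_score(y_true, (scores >= t).astype(int))
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Toy usage: synthetic scores for an imbalanced problem (about 10% positives).
rng = np.random.default_rng(0)
y = (rng.random(200) < 0.1).astype(int)
scores = np.where(y == 1, rng.normal(1.0, 1.0, 200), rng.normal(-1.0, 1.0, 200))
t, f1 = eum_threshold(scores, y)
print(f"best threshold {t:.3f}, training F1 {f1:.3f}")

Threshold tuning is only the most basic EUM instance; the works cited above instead train the scoring function itself against (surrogates of) the F-measure.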
“…As the F-measure is a non-convex metric, the EUM approach often designs convex surrogates for optimizing the F-measure, which has resulted in two types of methods. The first type is the "direct method," which directly defines surrogate objective functions for maximizing the F-measure [9][10][11][12][13][14][15]. One representative work is SVMperf, which adopts the structural SVM as its surrogate framework and uses a cutting-plane algorithm to solve the inner optimization [11].…”
Section: Introduction (mentioning)
confidence: 99%
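The quoted description can be made concrete. In the structural-SVM formulation, each cutting-plane iteration must find the "most violated constraint": the labeling that maximizes the F1 loss plus the margin term. For F1 this argmax admits a well-known closed search (sort each class by score, then sweep over how many positives and negatives receive label +1), following Joachims' 2005 construction. The Python sketch below is an editorial reconstruction of that inner step only, not code from the paper under review; the outer quadratic program and slack bookkeeping are omitted.

import numpy as np

def most_violated_f1(scores, y):
    # Inner argmax of one cutting-plane step for the F1 structural loss:
    # find yb in {-1,+1}^n maximizing  (1 - F1(y, yb)) + sum_i yb_i * scores_i.
    # For fixed counts (a, b) of positives/negatives assigned +1, the linear
    # term is maximized by taking the top-scored examples of each class,
    # so an O(P*N) sweep over (a, b) suffices.
    pos = np.sort(scores[y == 1])[::-1]    # positive-class scores, descending
    neg = np.sort(scores[y == -1])[::-1]   # negative-class scores, descending
    P, N = len(pos), len(neg)
    cum_pos = np.concatenate(([0.0], np.cumsum(pos)))
    cum_neg = np.concatenate(([0.0], np.cumsum(neg)))
    total = scores.sum()
    best_obj, best_ab = -np.inf, (0, 0)
    for a in range(P + 1):                 # a positives predicted +1 -> tp = a, fn = P - a
        for b in range(N + 1):             # b negatives predicted +1 -> fp = b
            f1 = 2 * a / (a + b + P) if (a + b + P) > 0 else 0.0
            lin = 2 * (cum_pos[a] + cum_neg[b]) - total   # = sum_i yb_i * scores_i
            obj = (1.0 - f1) + lin
            if obj > best_obj:
                best_obj, best_ab = obj, (a, b)
    return best_ab, best_obj

Each outer iteration of the cutting-plane solver calls a search like this to add one constraint, re-solves a small quadratic program over the weights, and stops once no constraint is violated by more than a tolerance.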