2013 IEEE 13th International Conference on Data Mining Workshops
DOI: 10.1109/icdmw.2013.45

Cost-Free Learning for Support Vector Machines with a Reject Option

Abstract: In this work, we investigate the abstaining classification of binary support vector machines (SVMs) based on mutual information (MI). We obtain the reject rule by maximizing the MI between the true labels and the predicted labels, which is a post-processing method. The gradient and Hessian matrix of the MI are derived explicitly, so that Newton's method can be used for the optimization and converges very quickly. Unlike the existing reject rules for SVMs, the present MI-based reject rule does not require any exp…
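The abstract describes a post-processing scheme: the SVM is trained as usual, and a reject rule on its decision values is then fitted by maximizing the MI between true and predicted labels. The sketch below is a minimal illustration of that idea, not the authors' implementation: it rejects samples whose decision value falls between two thresholds and picks the thresholds by maximizing empirical MI. The function names (`empirical_mi`, `fit_reject_thresholds`) are hypothetical, and a coarse grid search stands in for the paper's Newton's method, which relies on the closed-form gradient and Hessian that the truncated abstract mentions.

```python
import numpy as np

def empirical_mi(t_true, y_pred, classes_t=(-1, 1), classes_y=(-1, 0, 1)):
    """Empirical mutual information I(T; Y) in nats from label pairs.

    y_pred uses 0 to denote 'reject', so Y ranges over {-1, 0, +1}.
    """
    mi = 0.0
    for t in classes_t:
        for y in classes_y:
            p_ty = np.mean((t_true == t) & (y_pred == y))
            p_t = np.mean(t_true == t)
            p_y = np.mean(y_pred == y)
            if p_ty > 0:
                mi += p_ty * np.log(p_ty / (p_t * p_y))
    return mi

def predict_with_reject(scores, t_low, t_high):
    """Label -1 below t_low, +1 above t_high, reject (0) in between."""
    y = np.zeros_like(scores, dtype=int)
    y[scores <= t_low] = -1
    y[scores >= t_high] = 1
    return y

def fit_reject_thresholds(scores, t_true, grid=None):
    """Pick (t_low, t_high) maximizing empirical MI by grid search.

    The paper instead maximizes MI with Newton's method, using the
    closed-form gradient and Hessian; a grid search keeps this sketch
    simple and dependency-free.
    """
    if grid is None:
        grid = np.quantile(scores, np.linspace(0.01, 0.99, 25))
    best = (-np.inf, 0.0, 0.0)
    for i, lo in enumerate(grid):
        for hi in grid[i:]:  # enforce t_low <= t_high
            mi = empirical_mi(t_true, predict_with_reject(scores, lo, hi))
            if mi > best[0]:
                best = (mi, lo, hi)
    return best  # (max MI, t_low, t_high)

# Toy usage on an imbalanced set of hypothetical SVM decision values.
rng = np.random.default_rng(0)
t = np.concatenate([np.full(900, -1), np.full(100, 1)])
s = np.concatenate([rng.normal(-1, 1, 900), rng.normal(0.5, 1, 100)])
mi, lo, hi = fit_reject_thresholds(s, t)
print(f"MI={mi:.3f}, reject band=({lo:.2f}, {hi:.2f})")
```

Because MI is invariant to the class prior, maximizing it tends to protect the minority class without explicit misclassification costs, which is what makes this a "cost-free" alternative to cost-sensitive rejection.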

Cited by 2 publications (5 citation statements)
References 15 publications
“…Overall, the experimental results confirm the effectiveness of the ASLS loss in imbalanced classification. In the future, we will further study the optimization and sparseness problems of the ASLS loss and compare the ASLS loss to the emerging C-loss function [8] and cost-free learning [39], [40].…”
Section: Discussion
confidence: 99%
“…Xu and Hu investigate the classification theory based on mutual information (MI). 22 The authors have proposed a reject rule for the SVM based on MI optimality. The MI-optimal rejection rule is defined as…”
Section: Figure 2, Sparseness Contrast Between (A) SVM and (B) LS...
confidence: 99%
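The quoted definition is cut off after "is defined as", so the exact expression from Xu and Hu cannot be recovered here. The objective underlying an MI-optimal reject rule, however, is the standard mutual information between the true label T and the three-valued prediction Y (including the reject outcome), which the rule's thresholds are chosen to maximize; how the paper parametrizes it beyond that is an open detail of the truncated quote.

```latex
% Mutual information between the true label T and the prediction Y,
% where Y ranges over {-1, reject, +1} for an abstaining classifier:
I(T;Y) \;=\; \sum_{t\in\{-1,+1\}}\ \sum_{y\in\{-1,\ \mathrm{reject},\ +1\}}
  p(t,y)\,\log\frac{p(t,y)}{p(t)\,p(y)}
```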
“…The authors conclude that, to avoid the severe losses that errors can cause, a reject option needs to be applied during classification. 22 Optimization within the SVM alone is not enough to enhance classification performance. 22 Incorporating other optimization techniques, such as particle swarm optimization (PSO), into the SVM may enhance the performance.…”
Section: Figure 2, Sparseness Contrast Between (A) SVM and (B) LS...
confidence: 99%