2019
DOI: 10.1109/access.2019.2914465
Robust Rescaled Hinge Loss Twin Support Vector Machine for Imbalanced Noisy Classification

Abstract: Support vector machine (SVM) and twin SVM (TWSVM) are sensitive to noise in classification because their loss functions are unbounded, which is especially problematic for imbalanced classification. In this paper, by combining the advantages of the correntropy-induced loss function (C-loss) and the hinge loss function (hinge loss), we introduce the rescaled hinge loss function (Rhinge loss), a monotonic, bounded, and nonconvex loss, into TWSVM for imbalanced noisy classification; the resulting method is called RTBSVM. We show that th…
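The abstract is truncated above, so the following is a minimal sketch, assuming the standard definition of the rescaled hinge loss from the Rhinge literature: the unbounded hinge loss is passed through the bounded exponential rescaling used by the C-loss. The names `eta` and the normalization `beta` follow that assumed convention and are not taken from this paper.

```python
# A minimal sketch of a rescaled hinge loss (assumed standard definition,
# not necessarily this paper's exact formulation).
import numpy as np

def hinge_loss(margins):
    """Classical (unbounded) hinge loss: max(0, 1 - y*f(x))."""
    return np.maximum(0.0, 1.0 - margins)

def rescaled_hinge_loss(margins, eta=0.5):
    """Bounded, monotonic, nonconvex rescaling of the hinge loss.

    eta > 0 controls how quickly the loss saturates; beta (assumed
    convention) normalizes the loss to 1 at margin 0.
    """
    beta = 1.0 / (1.0 - np.exp(-eta))
    return beta * (1.0 - np.exp(-eta * hinge_loss(margins)))

# Outliers with very negative margins saturate instead of growing linearly:
m = np.array([2.0, 0.0, -5.0, -50.0])
print(hinge_loss(m))           # [ 0.  1.  6. 51.]  -> unbounded
print(rescaled_hinge_loss(m))  # saturates toward beta as the margin drops
```

The boundedness is what gives robustness: a mislabeled point far on the wrong side of the margin contributes at most `beta` to the objective rather than a loss proportional to its distance.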

Cited by 20 publications (13 citation statements)
References: 50 publications
“…First, the objective function is constructed using the similarity of the current sample pairs and the Hamming-distance relationship, combined with the minimum loss-variance function [36], and the gradient-descent algorithm is used to solve for the hash projection vector; the objective function is further generalized so that the Hamming distance of similar (dissimilar) sample pairs is minimized (maximized), reducing the update redundancy caused by the update mechanism; finally, the hinge loss function [37] is used to filter out the hash map with the largest error, and the iterative calculation is re-entered until the number of iterations reaches the set value.…”
Section: Related Work
Mentioning confidence: 99%
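As a minimal sketch of the hinge-loss filtering step described in the excerpt above, assuming a common convention in hashing: binary codes are compared by Hamming distance, and a pairwise hinge loss penalizes dissimilar pairs whose codes fall inside a margin. The names `margin`, `codes_a`, and `similar` are illustrative, not from the cited paper.

```python
# Pairwise hinge loss over Hamming distances (assumed hashing convention).
import numpy as np

def pairwise_hinge(codes_a, codes_b, similar, margin=2):
    """Hinge loss on Hamming distances between binary code pairs.

    Similar pairs are pushed toward distance 0; dissimilar pairs are only
    penalized while their Hamming distance stays below `margin`.
    """
    d_h = np.sum(codes_a != codes_b, axis=1)      # Hamming distance per pair
    loss_sim = d_h.astype(float)                  # minimize for similar pairs
    loss_dis = np.maximum(0.0, margin - d_h)      # hinge for dissimilar pairs
    return np.where(similar, loss_sim, loss_dis)

codes_a = np.array([[0, 1, 1, 0], [1, 1, 0, 0]])
codes_b = np.array([[0, 1, 0, 0], [1, 1, 0, 1]])
similar = np.array([True, False])
losses = pairwise_hinge(codes_a, codes_b, similar)
worst = np.argmax(losses)  # the mapping with the largest error, to be filtered
```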
“…Twin SVM (TWSVM) with a rescaled hinge (Rhinge) loss was proposed in [68] to cope with large datasets with noise and imbalanced data while maintaining generalization ability. The authors tested the performance of the proposed TWSVM with Rhinge on benchmark datasets from the UCI repository.…”
Section: Loss Function Related Work
Mentioning confidence: 99%
“…The least-squares loss, i.e., L_LS(u) := u^2, is considered in [8] via a different LCQP formulation than in the hinge-loss case. The correntropy loss is defined in [9]; various compositions of it with other loss functions can be investigated, e.g., [10] for the SVM case and [11] for the imbalanced TWSVM case. The advantages of nonlinear convex loss functions for other classification methods are discussed in [12], where once again the hinge loss is shown to be the tightest margin-based upper bound on the misclassification loss for many classification problems.…”
Section: Background and Problem Definition
Mentioning confidence: 99%
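To make the contrast in that excerpt concrete, here is a small sketch of the losses it names, assuming the usual residual parameterization u for the least-squares and correntropy losses and the margin parameterization for the hinge loss; `sigma` and the C-loss normalization `beta` follow the correntropy literature and are assumptions, not definitions from this paper.

```python
# Sketch contrasting unbounded (least-squares, hinge) and bounded
# (correntropy-induced) losses, under assumed standard definitions.
import numpy as np

def least_square(u):
    """L_LS(u) = u^2: convex and unbounded."""
    return u ** 2

def hinge(u):
    """Hinge on the margin u = y*f(x): convex, unbounded as u decreases."""
    return np.maximum(0.0, 1.0 - u)

def c_loss(u, sigma=1.0):
    """Correntropy-induced loss: bounded, nonconvex; saturates for outliers.

    beta (assumed convention) normalizes the loss to 1 at |u| = 1.
    """
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
    return beta * (1.0 - np.exp(-u ** 2 / (2.0 * sigma ** 2)))

u = np.array([0.0, 1.0, 10.0, 100.0])
print(least_square(u))  # grows quadratically -> dominated by outliers
print(c_loss(u))        # approaches beta, so outliers have bounded influence
```

This is exactly why compositions such as the Rhinge loss are attractive: they retain the margin behavior of the hinge loss near the decision boundary while inheriting the bounded tail of the correntropy loss.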
“…The first approach to robust SVM is the use of different loss functions, with examples given in [11], [5], [6], [7], [8], [9], [10], where model robustness is probed by contaminating the data's features and/or labels. The second approach relaxes the constraints of the standard optimization problem, rewriting (2.2) as a chance-constrained (CC) instance and/or relying on robust optimization (RO).…”
Section: Robust SVM
Mentioning confidence: 99%