2017
DOI: 10.1016/j.patrec.2016.11.016

Ramp Loss based robust one-class SVM

Cited by 37 publications (23 citation statements)
References 19 publications
Citation statements: 0 supporting, 23 mentioning, 0 contrasting
“…Zhu et al. [31] and Cha et al. [32] employed nearest neighbor information to compute the relevant weights. Furthermore, Xiao et al. [34] recently introduced the ramp loss function into model training to effectively limit the negative influence of anomalies.…”
Section: Robustness (mentioning)
confidence: 99%
“…For the generation of model diversity, we improve the OCSVM as a base classifier. The general equation of the OCSVM [34] is as follows:…”
Section: Problem Analysis (mentioning)
confidence: 99%
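The quoted statement cuts off before the equation itself. As a hedged sketch (not necessarily the exact notation used in [34]), the standard one-class SVM primal that ramp-loss variants build on is usually written as

\min_{\mathbf{w},\,\boldsymbol{\xi},\,\rho}\ \frac{1}{2}\|\mathbf{w}\|^{2} + \frac{1}{\nu n}\sum_{i=1}^{n}\xi_{i} - \rho
\quad \text{s.t.}\quad \langle \mathbf{w},\phi(\mathbf{x}_{i})\rangle \ge \rho - \xi_{i},\ \ \xi_{i}\ge 0,\ \ i=1,\dots,n,

where \phi maps inputs into the kernel feature space, \nu \in (0,1] upper-bounds the fraction of training points treated as outliers, and the decision function is f(\mathbf{x}) = \operatorname{sgn}(\langle \mathbf{w},\phi(\mathbf{x})\rangle - \rho).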
“…Experimental evaluations on several data sets demonstrate that our proposed ensemble loss function significantly improves the performance of a simple regressor in comparison with state-of-the-art methods. Loss functions are fundamental components of machine learning systems and are used to train the parameters of the learner model. Since standard training methods aim to determine the parameters that minimize the average value of the loss over an annotated training set, loss functions are crucial for successful training [49,55]. Bayesian estimators are obtained by minimizing the expected loss function.…”
mentioning
confidence: 99%
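To make the last sentence of that statement concrete (the symbols here are generic, not taken from the citing paper), a Bayesian estimator is defined by minimizing the posterior expected loss:

\hat{\theta}(x) \;=\; \arg\min_{a}\ \mathbb{E}\!\left[L(\theta,a)\mid x\right] \;=\; \arg\min_{a}\int L(\theta,a)\,p(\theta\mid x)\,d\theta .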
“…The 0-1 loss function is known as a robust loss because it assigns the value 1 to all misclassified samples, including outliers, so an outlier does not influence the decision function, leading to a robust learner. On the other hand, the 0-1 loss penalizes all misclassified samples equally with the value 1, and since it does not enhance the margin, it is not an appropriate choice for applications where the margin matters [49]. The Ramp loss function, another type of loss function, is defined similarly to the 0-1 loss, with the only difference that the ramp loss also penalizes some correctly classified samples, namely those with small margins. This minor difference makes the Ramp loss function appropriate for applications where the margin matters [43,49].…”
mentioning
confidence: 99%
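For reference, one common way to write the ramp loss described above (the parameter names are illustrative, not quoted from the source) uses the margin z = y\,f(x) and a truncation point s < 1:

R_{s}(z) \;=\; \min\bigl(1-s,\ \max(0,\,1-z)\bigr) \;=\; H_{1}(z) - H_{s}(z), \qquad H_{s}(z) = \max(0,\,s-z).

Like the 0-1 loss, it caps the penalty on badly misclassified points (including outliers) at the constant 1-s, while still penalizing correctly classified points whose margin falls below 1, which is the margin-enhancing behavior noted in the statement.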