2017
DOI: 10.1016/j.patcog.2017.03.011

Support vector machine classifier with truncated pinball loss

Cited by 119 publications (47 citation statements)
References 11 publications
“…The penalty is fixed at κ for t < −κ and is linear otherwise. The SVM with this loss (Tpin-SVM) can be found in [14].…”
Section: Non-convex Soft-margin Losses
confidence: 99%
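The loss shape described in the quote above can be sketched in a few lines. This is an illustrative simplification only: it assumes unit slope on the linear segment and zero loss for t ≥ 0, whereas the full truncated pinball loss in [14] also includes a pinball penalty on the t > 0 side.

```python
def truncated_hinge_like_loss(t, kappa=1.0):
    """Illustrative loss matching the quoted description:
    fixed penalty kappa for t < -kappa, linear penalty -t on
    [-kappa, 0), and (by assumption here) zero for t >= 0."""
    return min(max(0.0, -t), kappa)
```

The `min(..., kappa)` truncation is what bounds the influence of badly misclassified points, which is the motivation for non-convex soft-margin losses.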
“…The basic idea of SVM is to maximize the distance between two classes. The distance is traditionally determined by the closest points [37]. Its effectiveness has been demonstrated in pattern recognition studies [38].…”
Section: Support Vector Machine (SVM)
confidence: 99%
“…Cover and Hart proposed the first KNN in 1968; it is a classification algorithm. KNN belongs to instance-based learning and is a type of lazy learning; thus, KNN has no explicit learning process or training phase [13]. The KNN data set already contains class labels and feature values, so new samples can be processed directly upon arrival.…”
Section: Related Work
confidence: 99%
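The lazy-learning behaviour described in the quote can be sketched as follows. There is no training step: the labelled data is consulted only when a query arrives. The function name and toy data are illustrative, not from the cited work.

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Lazy k-nearest-neighbour classification: sort the stored
    labelled points by squared distance to the query and take a
    majority vote among the k closest."""
    # train: list of ((feature, ...), label) pairs
    nearest = sorted(
        train,
        key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)),
    )
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((6, 5), "b")]
knn_predict(train, (0.5, 0.5), k=3)  # -> "a"
knn_predict(train, (5, 6), k=3)      # -> "b"
```

Because all work happens at query time, adding new labelled samples requires no retraining, which is the "no explicit training phase" property the quote highlights.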