Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02. 2002
DOI: 10.1109/iconip.2002.1201951

Critical support vector machine without kernel function

Abstract: A drawback of the SVM technique is that it leads to a quadratic programming problem with a dense, structured, positive semi-definite matrix, and it also requires a set of kernel functions. We propose learning algorithms that do not need any kernel functions. Separability is based on the critical support vectors (CSV) essential to determining the locations of all separating hyperplanes. The algorithms give better performance compared with other proposed SVM-based algorithms when they are tested w…
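The "critical support vectors" of the abstract are the training points that pin down the separating hyperplane. As a purely illustrative sketch, not the paper's CSV algorithm, the following trains a plain linear (kernel-free) SVM with scikit-learn and lists which points fix the hyperplane; the synthetic data and all variable names are assumptions of this example.

# Minimal sketch: a kernel-free separating hyperplane whose location is
# determined by a small set of support vectors (the role CSVs play above).
# This is a generic linear SVM, NOT the algorithm proposed in the paper.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two linearly separable synthetic clusters, for illustration only.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(+2.0, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

clf = SVC(kernel="linear", C=1e3).fit(X, y)  # linear: no kernel mapping
print("hyperplane w:", clf.coef_[0], "b:", clf.intercept_[0])
print("indices of the points that pin the hyperplane:", clf.support_)

Only the few indices in clf.support_ matter for the hyperplane; removing any other training point leaves the solution unchanged, which is the intuition behind restricting attention to critical support vectors.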

Cited by 16 publications (4 citation statements)
References 14 publications
“…One of the earliest works on disease prediction, reported in 2002 [42], applied the critical SVM without a kernel function to a number of benchmark datasets. The proposed algorithm was also applied to PIDD, where the reported accuracy was 82.3% without any cross-validation on the PIDD dataset.…”
Section: Disease Prediction Using Other Learners
confidence: 99%
“…In a support vector machine (SVM), the input space is mapped into a high-dimensional feature space, and since the kernel trick avoids handling the mapping function explicitly, the SVM is usually trained in the dual form. Many training methods have been developed [1][2][3][4][5][6][7][8][9]. But because the coefficient vector of the hyperplane is expressed as a kernel expansion, substituting that expansion for the coefficient vector makes the SVM solvable in the primal form as well.…”
Section: Introduction
confidence: 99%
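A hedged sketch of the substitution this passage describes, in notation of our own choosing rather than the cited paper's: writing the coefficient vector as a kernel expansion and substituting it into the primal hinge-loss objective leaves a problem that touches the feature map only through the Gram matrix.

\min_{\mathbf{w}}\;\tfrac{1}{2}\lVert\mathbf{w}\rVert^{2}
  + C\sum_{i=1}^{n}\max\bigl(0,\,1 - y_{i}\,\mathbf{w}^{\top}\phi(\mathbf{x}_{i})\bigr)
\;\xrightarrow{\;\mathbf{w}=\sum_{j}\beta_{j}\phi(\mathbf{x}_{j})\;}\;
\min_{\boldsymbol{\beta}}\;\tfrac{1}{2}\,\boldsymbol{\beta}^{\top}K\boldsymbol{\beta}
  + C\sum_{i=1}^{n}\max\bigl(0,\,1 - y_{i}\,(K\boldsymbol{\beta})_{i}\bigr),
\qquad K_{jk}=\phi(\mathbf{x}_{j})^{\top}\phi(\mathbf{x}_{k}).

Since \lVert\mathbf{w}\rVert^{2}=\boldsymbol{\beta}^{\top}K\boldsymbol{\beta} and \mathbf{w}^{\top}\phi(\mathbf{x}_{i})=(K\boldsymbol{\beta})_{i}, the mapping \phi never needs to be evaluated explicitly, which is why the primal form becomes solvable as the passage states.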