A fast iterative single data approach to training unconstrained least squares support vector machines
Neurocomputing, 2013. DOI: 10.1016/j.neucom.2012.11.030

Cited by 9 publications (9 citation statements), with citing publications spanning 2013–2022; all 9 statements are classified as mentioning (0 supporting, 0 contrasting). References 33 publications.

Citation statements, ordered by relevance:
“…Model selection seeks proper values of the hyper-parameters, commonly by means of cross-validation and grid search [32]. The k-fold cross-validation [12,13] partitions the training data into k disjoint subsets of approximately equal size.…”
Section: Kernel Function and Model Selection (mentioning)
confidence: 99%
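The quoted procedure is standard practice. As a minimal sketch (not code from the paper or the citing work; the dataset, grid values, and estimator are illustrative), here is a k-fold grid search in Python with scikit-learn, using an RBF-kernel SVM as a stand-in for the LS-SVM:

```python
# Hedged sketch: exhaustive grid search scored by k-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# 5-fold CV: the training data are partitioned into 5 disjoint subsets of
# approximately equal size, each used once as the validation fold.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```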
“…Note that in deriving the exact bound in Appendix A, we assumed that the separating hyperplane u^T x + v = 0 correctly separates the linearly separable training points; consequently, no other constraints are present in the optimization problem (6).…”
Section: The Linear Minimal Complexity Machine (mentioning)
confidence: 99%
“…We begin with the optimization problem in (6), which was obtained from the exact bound on γ derived in Appendix A. In deriving the exact bound in Appendix A, we assumed that the separating hyperplane u^T x + v = 0 correctly separates the linearly separable training points; hence, no other constraints are present in the optimization problem (6). For the convenience of the reader, (6) [also (48)] is reproduced below.…”
Section: Appendix B: The Hard Margin MCM Formulation (mentioning)
confidence: 99%
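Both MCM statements rest on the same assumption: every training point lies strictly on the correct side of the hyperplane u^T x + v = 0. A minimal sketch of that separability condition (assuming the usual labels y_i in {-1, +1}; the function and data are illustrative, not from either paper):

```python
import numpy as np

def separates_correctly(u, v, X, y):
    """True iff the hyperplane u^T x + v = 0 correctly separates the
    training set, i.e. y_i * (u^T x_i + v) > 0 for every sample."""
    margins = y * (X @ u + v)
    return bool(np.all(margins > 0))

# Illustrative check on a trivially separable 2-D set.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -2.0], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
print(separates_correctly(np.array([1.0, 1.0]), 0.0, X, y))  # True
```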
“…The solution of LS-SVM lacks sparseness, which makes the test phase slow. There are fast algorithms for LS-SVM, such as a conjugate-gradient (CG) algorithm [34,176], an SMO algorithm [95], and a coordinate-descent algorithm [112]. These algorithms achieve low complexity, but their solutions are not sparse.…”
Section: Least-Squares SVMs (mentioning)
confidence: 99%
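For context, LS-SVM training reduces to one square linear system of KKT conditions; the CG, SMO, and coordinate-descent methods cited above are iterative solvers for that system. A minimal numpy sketch of the classification form (a direct dense solve for clarity, with illustrative hyper-parameter values; not code from the survey or the cited algorithms):

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Solve the LS-SVM classification KKT system
        [ 0      y^T          ] [b    ]   [0]
        [ y   Omega + I/gamma ] [alpha] = [1]
    with Omega_ij = y_i * y_j * K(x_i, x_j)."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    b, alpha = sol[0], sol[1:]
    # The equality constraints make essentially every alpha_i nonzero,
    # which is exactly the lack of sparseness the quoted passage notes.
    return b, alpha
```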