2010
DOI: 10.1007/s11063-010-9162-9

First and Second Order SMO Algorithms for LS-SVM Classifiers

Abstract: Least squares support vector machine (LS-SVM) classifiers have been traditionally trained with conjugate gradient algorithms. In this work, completing the study by Keerthi et al., we explore the applicability of the SMO algorithm for solving the LS-SVM problem, by comparing First Order and Second Order working set selections, concentrating on the RBF kernel, which is the most usual choice in practice. It turns out that, considering all the range of possible values of the hyperparameters, Second Order working se…
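For context, the quadratic programme that both the conjugate gradient and the SMO approaches address is the standard LS-SVM dual. The following is a textbook formulation under the usual notation (regularisation constant γ, kernel K); the paper's exact parametrisation may differ slightly:

```latex
% LS-SVM dual with bias: a strictly convex QP with one equality constraint
\min_{\alpha}\ \tfrac12\,\alpha^{\top} Q\,\alpha-\mathbf 1^{\top}\alpha
\quad\text{s.t.}\quad y^{\top}\alpha=0,
\qquad Q_{ij}=y_i y_j\!\left(K(x_i,x_j)+\delta_{ij}/\gamma\right).

% Equivalent KKT linear system usually handed to conjugate gradient solvers
\begin{bmatrix} 0 & y^{\top}\\ y & \Omega+I/\gamma \end{bmatrix}
\begin{bmatrix} b\\ \alpha \end{bmatrix}
=\begin{bmatrix} 0\\ \mathbf 1 \end{bmatrix},
\qquad \Omega_{ij}=y_i y_j K(x_i,x_j).
```

Conjugate gradient methods work on the linear system; SMO instead optimises the constrained QP two variables at a time.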

Cited by 24 publications (18 citation statements). References 14 publications.
“…Exploiting the fact that LSSVM, RLS and kernel FDA are equivalent (Rifkin, 2002; Gestel et al, 2002; Keerthi and Shevade, 2003), sequential minimal optimisation (SMO) techniques (Joachims, 1988) developed for LSSVM (Keerthi and Shevade, 2003; Lopez and Suykens, 2011) can be employed to remedy these problems. This effectively leads to an interleaved algorithm that is similar to Algorithm 2 in Kloft et al (2011), but applies to square loss instead of to hinge loss.…”
Section: Interleaved Optimisation of the Saddle Point Problem (mentioning)
confidence: 99%
“…Such an interleaved optimisation strategy allows for a very cheap update of a minimal subset of the dual variables α k in each α step, without having to have access to the whole kernel matrices, and as a result extends the applicability of MK-FDA to large scale problems. We omit details of the resulting interleaved MK-FDA algorithm, the interested reader is referred to Keerthi and Shevade (2003) and Lopez and Suykens (2011).…”
Section: Interleaved Optimisation of the Saddle Point Problem (mentioning)
confidence: 99%
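As a rough illustration of the interleaved strategy described above, the sketch below alternates a handful of cheap pairwise (SMO-style) updates of the dual variables with a closed-form ℓ_p-norm re-weighting of the base kernels. It is a schematic sketch, assuming small precomputed kernel matrices and a standard ℓ_p-MKL weight update; it is not the exact algorithm of Kloft et al. (2011) or Lopez and Suykens (2011), and all names are illustrative.

```python
import numpy as np

def interleaved_mkl_lssvm(K_list, y, gamma, p=2.0, n_outer=20, n_inner=200):
    """Schematic interleaving of (a) a few SMO-style pairwise updates of the
    dual variables alpha on the current combined kernel and (b) an lp-norm
    style update of the kernel weights beta.  Kernels are precomputed here
    for brevity; a large-scale variant would fetch kernel columns on demand."""
    n, m = len(y), len(K_list)
    alpha = np.zeros(n)                       # feasible start: y @ alpha == 0
    beta = np.full(m, m ** (-1.0 / p))        # uniform weights, ||beta||_p = 1

    for _ in range(n_outer):
        # effective kernel K~ = sum_m beta_m K_m + I/gamma (strictly pos. def.)
        Kt = sum(b * K for b, K in zip(beta, K_list)) + np.eye(n) / gamma
        F = y - Kt @ (y * alpha)              # F_k = -y_k * gradient_k

        # (a) alpha-step: a few maximum-violating-pair updates
        for _ in range(n_inner):
            i, j = int(np.argmax(F)), int(np.argmin(F))
            gap = F[i] - F[j]
            if gap < 1e-6:
                break
            eta = Kt[i, i] + Kt[j, j] - 2.0 * Kt[i, j]
            t = gap / eta                     # exact minimiser along the pair
            alpha[i] += y[i] * t
            alpha[j] -= y[j] * t
            F -= t * (Kt[:, i] - Kt[:, j])    # two kernel columns suffice

        # (b) beta-step: lp-MKL style closed-form re-weighting
        ay = y * alpha
        w2 = np.array([max(b * b * (ay @ K @ ay), 1e-12)
                       for b, K in zip(beta, K_list)])
        beta = w2 ** (1.0 / (p + 1))
        beta /= np.sum(beta ** p) ** (1.0 / p)   # project onto ||beta||_p = 1
    return alpha, beta
```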
“…These algorithms achieve low complexity, but their solutions are not sparse. In [121], the applicability of SMO is explored for solving the LS-SVM problem, by comparing first-order and second-order working-set selections, concentrating on the RBF kernel. Second-order working-set selection is more convenient than the first-order one.…”
Section: Least-Squares SVMs (mentioning)
confidence: 99%
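The first-order (maximum violating pair) rule referred to above can be sketched as follows, assuming the standard LS-SVM dual with equality constraint yᵀα = 0 and effective kernel K̃ = K + I/γ; names are illustrative, not taken from [121]:

```python
import numpy as np

def mvp_pair(F):
    """First-order working-set selection (maximum violating pair):
    F_k = -y_k * grad_k of the LS-SVM dual objective.  At the optimum all
    F_k equal the bias b, so F[i] - F[j] measures the KKT violation."""
    i, j = int(np.argmax(F)), int(np.argmin(F))
    return i, j, F[i] - F[j]

def pair_update(alpha, F, y, Kt, i, j):
    """Exact line search along the feasible direction
    alpha_i += y_i * t, alpha_j -= y_j * t; eta > 0 because Kt = K + I/gamma
    is positive definite.  Only two kernel columns are touched."""
    eta = Kt[i, i] + Kt[j, j] - 2.0 * Kt[i, j]
    t = (F[i] - F[j]) / eta
    alpha[i] += y[i] * t
    alpha[j] -= y[j] * t
    F -= t * (Kt[:, i] - Kt[:, j])            # incremental gradient maintenance
    return alpha, F
```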
“…Here, they selected the pair with the largest violation of the KKT optimality conditions, that is, the maximum violating pair (MVP). The second order SMO algorithm for LS-SVM [22] uses second order approximation information of the dual function [23]: the first index is selected as in the first order SMO algorithm [18], but the second index is selected with a more accurate criterion, so the second order algorithm needs far fewer iterations than the first order one. It has been shown that LS-SVM can be simplified further and extended to general practical applications (regression, binary and multiclass classification) without major changes with the unified ELM solution [24][25][26][27].…”
Section: Introduction (mentioning)
confidence: 99%
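Building on the same notation as the sketch above, the second-order rule described in this statement keeps the first index at the maximum of F but picks the second index by maximizing the guaranteed decrease of the dual objective, (F_i − F_j)² / (2 η_ij) with η_ij = K̃_ii + K̃_jj − 2K̃_ij. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def second_order_pair(F, Kt):
    """Second-order working-set selection: i is the maximal F (as in the
    first-order rule); j maximizes the objective decrease
    (F[i] - F[j])**2 / (2 * eta_ij) over indices with F[j] < F[i]."""
    i = int(np.argmax(F))
    eta = Kt[i, i] + np.diag(Kt) - 2.0 * Kt[:, i]     # eta_ij for every j
    gain = np.where(F < F[i],
                    (F[i] - F) ** 2 / (2.0 * np.maximum(eta, 1e-12)),
                    -np.inf)
    j = int(np.argmax(gain))
    return i, j
```

Each selection remains O(n) per iteration given the cached column K̃[:, i], but the better second index typically cuts the iteration count substantially, which is the effect reported in the statement above.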