2005
DOI: 10.1109/tnn.2005.852239

SMO-Based Pruning Methods for Sparse Least Squares Support Vector Machines

Abstract: Solutions of least squares support vector machines (LS-SVMs) are typically nonsparse. Sparseness is imposed by subsequently omitting data that introduce the smallest training errors and retraining on the remaining data. Iterative retraining requires more intensive computation than training a single nonsparse LS-SVM. In this paper, we propose a new pruning algorithm for sparse LS-SVMs: the sequential minimal optimization (SMO) method is introduced into the pruning process; in addition, instead of determining the pruning points by errors, the data points whose omission introduces the smallest change in the dual objective function are pruned.
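As a point of reference for the abstract, here is a minimal sketch of the classical error-based pruning baseline it contrasts with (not the paper's SMO-based algorithm): train the LS-SVM by solving its KKT linear system, repeatedly drop the points with the smallest |alpha_i| (in LS-SVM, alpha_i = gamma * e_i, so these correspond to the smallest training errors), and retrain. The kernel choice, hyperparameters, and function names below are illustrative assumptions.

import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_lssvm(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM regression KKT linear system for (b, alpha).
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, support values alpha

def prune_by_error(X, y, keep_ratio=0.5, drop_per_step=5, **kw):
    # Error-based pruning baseline: omit the smallest-|alpha| points, retrain, repeat.
    idx = np.arange(len(y))
    while len(idx) > keep_ratio * len(y):
        _, alpha = train_lssvm(X[idx], y[idx], **kw)
        order = np.argsort(np.abs(alpha))           # smallest errors first
        idx = idx[np.sort(order[drop_per_step:])]   # drop the smallest-error points
    return idx, train_lssvm(X[idx], y[idx], **kw)

Each pass re-solves the full linear system on the surviving data, which is exactly the iterative-retraining cost the abstract says the SMO-based method avoids.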

Cited by 85 publications (41 citation statements, published 2007-2019). References 14 publications.
“…Pruning in LS-SVM is investigated using an SMO-based pruning method [226]. It requires solving a set of linear equations for pruning each sample, causing a large computational cost.…”
Section: Pruning SVMs
confidence: 99%
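For context, training a standard LS-SVM amounts to solving a single (N+1)-dimensional linear system, so any pruning scheme that re-evaluates the solution after each removal faces repeated linear solves. A sketch of the standard system, under the usual LS-SVM function-estimation formulation (notation assumed here, not taken from the survey quoted above):

\begin{bmatrix} 0 & \mathbf{1}_N^{\top} \\ \mathbf{1}_N & \Omega + \gamma^{-1} I_N \end{bmatrix}
\begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix}
=
\begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix},
\qquad \Omega_{ij} = K(\mathbf{x}_i, \mathbf{x}_j).

A dense solve of this system costs on the order of O(N^3), which is why per-sample retraining becomes expensive as the training set grows.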
“…The sparseness is imposed by subsequently omitting data that introduce the smallest training errors and retraining on the remaining data. In the following, the SMO-based pruning algorithms of [7] are given. From (5), the Karush-Kuhn-Tucker (KKT) conditions for optimality are:…”
Section: SMO-Based Pruning Algorithms for LS-SVM
confidence: 99%
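The excerpt above truncates at the KKT conditions. For the standard LS-SVM regression primal, minimizing (1/2)||w||^2 + (gamma/2) * sum_i e_i^2 subject to y_i = w^T phi(x_i) + b + e_i, the optimality conditions of the Lagrangian are (a standard result, restated here rather than quoted from [7]):

\frac{\partial L}{\partial w}=0 \;\Rightarrow\; w=\sum_i \alpha_i\,\varphi(x_i), \qquad
\frac{\partial L}{\partial b}=0 \;\Rightarrow\; \sum_i \alpha_i=0,
\frac{\partial L}{\partial e_i}=0 \;\Rightarrow\; \alpha_i=\gamma e_i, \qquad
\frac{\partial L}{\partial \alpha_i}=0 \;\Rightarrow\; w^{\top}\varphi(x_i)+b+e_i-y_i=0.

Eliminating w and the e_i from these conditions yields the linear system sketched earlier; the relation alpha_i = gamma * e_i is also what makes |alpha_i| a proxy for the training error in error-based pruning.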
“…Iterative retraining requires more intensive computation than training a single non-sparse LS-SVM. In this paper we use the methods proposed in [7], which address both the computational cost and the regression accuracy.…”
Section: Introduction
confidence: 99%
“…For more on LS-SVM pruning algorithms, Hoegaerts et al. [12] provided a comparison among these algorithms and concluded that pruning schemes can be divided into QR-decomposition-based and feature-vector-selection-based approaches. Instead of determining the pruning points by errors, Zeng and Chen [13] introduced the sequential minimal optimization method to omit the datum that will lead to the minimum change in the dual objective function. Based on kernel partial least squares identification, Song and Gui [14] presented a method to obtain basis vectors by reducing the kernel matrix via Schmidt orthogonalization.…”
Section: Introduction
confidence: 99%
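The contrast drawn in this excerpt comes down to the selection criterion inside the pruning loop. A hedged sketch of that swap, with dual_objective_change as a hypothetical helper standing in for the SMO-style estimate used in [13]:

# Illustrative only, not the exact procedure of [13]: dual_objective_change
# is a hypothetical callable estimating how much the dual objective would
# change if sample i were omitted (in [13] this is obtained via SMO-style
# updates rather than by re-solving the full KKT system).
def select_pruning_point(candidate_indices, training_error, dual_objective_change):
    # Error-based baseline: remove the sample with the smallest training error.
    error_based = min(candidate_indices, key=lambda i: abs(training_error(i)))
    # Dual-objective criterion: remove the sample whose omission changes
    # the dual objective function the least.
    objective_based = min(candidate_indices, key=lambda i: abs(dual_objective_change(i)))
    return error_based, objective_based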