2008
DOI: 10.3182/20080706-5-kr-1001.01191
Improved Training of an Optimal Sparse Least Squares Support Vector Machine

Cited by 6 publications (5 citation statements) | References: 8 publications
“…Arbitrarily given for the th hidden node, the number of FPOs to evaluate the cost function and the number of FPOs to compute both the gradient vector and the Hessian matrix are given as follows: (53) Then, the number of FPOs for an epoch of search is (54) where denotes the average number of trial tests for the optimal update step.…”
Section: A. Stage I - Continuous Forward RBF Neural Modeling (mentioning, confidence: 99%)
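The cited equations (53) and (54) are not reproduced in the excerpt above, so the following is only a generic sketch of how such a per-epoch floating-point-operation (FPO) budget might be tallied; the names `fpo_cost`, `fpo_grad_hess`, and `avg_trials` and the sample numbers are illustrative placeholders, not quantities from the cited paper.

    def fpos_per_epoch(fpo_cost, fpo_grad_hess, avg_trials):
        # Hypothetical bookkeeping, not the paper's equation (54):
        # one gradient/Hessian evaluation per epoch, plus one cost-function
        # evaluation for each trial test of the optimal update step.
        return fpo_grad_hess + avg_trials * fpo_cost

    # Example: a cost evaluation costing 1.2e4 FPOs, gradient + Hessian costing
    # 8.5e4 FPOs, and a step search averaging 6 trials per epoch:
    print(fpos_per_epoch(1.2e4, 8.5e4, 6))   # about 1.57e5 FPOs per epoch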
“…If the kernel function is chosen to be a Gaussian, then the hypotheses in SVMs are RBF networks [52]. For some SVM variants such as the least squares SVMs [49], the design of Gaussian RBF kernel is vital in improving the sparseness of the solution [22], [23], [50], [53].…”
Section: Introduction (mentioning, confidence: 99%)
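To make the link the citing authors draw between Gaussian-kernel SVMs and RBF networks concrete, here is a minimal sketch of least-squares SVM regression with a Gaussian RBF kernel, following the standard Suykens-style dual linear system; the kernel width `sigma` and regularization `gamma` are illustrative values, not settings from any of the cited papers.

    import numpy as np

    def rbf_kernel(A, B, sigma):
        # Gaussian RBF kernel: K[i, j] = exp(-||a_i - b_j||^2 / (2 * sigma^2))
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    def lssvm_fit(X, y, sigma=1.0, gamma=10.0):
        # Standard LS-SVM dual system (function-estimation form):
        #   [ 0     1^T         ] [ b     ]   [ 0 ]
        #   [ 1     K + I/gamma ] [ alpha ] = [ y ]
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        M = np.zeros((n + 1, n + 1))
        M[0, 1:] = 1.0
        M[1:, 0] = 1.0
        M[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]            # bias b, dual coefficients alpha

    def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
        # Each training point contributes one Gaussian "hidden unit",
        # which is the RBF-network view of the fitted model.
        return rbf_kernel(X_new, X_train, sigma) @ alpha + b

Because every training point generally receives a nonzero coefficient alpha_i, the resulting kernel expansion is dense, which is the sparseness issue the cited works set out to improve.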
“…Equation (3) shows that an LS-SVM can also be viewed as a ridge regression model. And the optimality conditions (7) indicate that the introduction of slack variable is the root of the nonsparseness problem, since in regression applications, most slack variables end as nonzero [20]. But the introduction of seems inevitable for the representation of training cost and thus the penalty parameter to indicate the tradeoff between training cost and generalization abilities.…”
Section: Least Squares Approximation Sparse SVM (mentioning, confidence: 99%)
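For readers without access to the citing paper's equations (3) and (7), the standard LS-SVM formulation (after Suykens and Vandewalle) shows where the nonsparseness comes from; the notation below is the conventional one and need not match the citing paper's numbering or symbols.

    \min_{w,\,b,\,e}\ \frac{1}{2}\, w^{\top} w \;+\; \frac{\gamma}{2} \sum_{i=1}^{N} e_i^{2}
    \quad \text{s.t.} \quad y_i = w^{\top}\varphi(x_i) + b + e_i, \qquad i = 1, \dots, N.

    % Stationarity of the Lagrangian with respect to each e_i gives
    \alpha_i = \gamma\, e_i, \qquad i = 1, \dots, N.

A dual coefficient alpha_i therefore vanishes only when the corresponding fitting error e_i is exactly zero; in regression this almost never happens, so nearly all training points remain in the expansion, which is exactly the nonsparseness the excerpt attributes to the slack variables.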
“…The implementation technique for the regression model has benefited from the recently-emerged training algorithm of the least-squares support vector machine (LS-SVM) (Suykens and Vandewalle 1999; Gestel et al. 2004), which is assured to have a sparse solution (Xia et al. 2008). [Fig. 2: Simplifying a three-class DB2 tree to a single node.] Meanwhile, the rest of the nodes in the decision tree each accommodates a standard binary SVM classifier trained on two contrasting classes.…”
Section: Introduction (mentioning, confidence: 99%)