2018 · DOI: 10.1016/j.neunet.2018.07.008

Training sparse least squares support vector machines by the QR decomposition

Abstract: The solution of an LS-SVM suffers from the problem of non-sparseness. The paper proposes to apply the kernel matching pursuit (KMP) algorithm, with the number of support vectors as the regularization parameter, to tackle the non-sparseness problem of LS-SVMs. The idea of the KMP algorithm is first revisited from the perspective of the QR decomposition of the kernel matrix on the training set. Strategies are further developed to select those support vectors which minimize the leave-one-out cross-validation…
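To make the greedy selection idea concrete, here is a minimal sketch of kernel matching pursuit for a sparse LS-SVM in NumPy. The RBF kernel, the back-fitting with np.linalg.lstsq (which relies on an orthogonal factorization internally, standing in for the paper's explicit incremental QR updates), and the residual-correlation selection rule used instead of the paper's leave-one-out criterion are all simplifying assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of X and rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_lssvm_kmp(X, y, n_sv=20, gamma=1.0):
    """Greedy kernel matching pursuit: add one support vector (kernel column)
    at a time and refit the coefficients by least squares after each addition.
    n_sv plays the role of the regularization parameter (number of SVs)."""
    y = np.asarray(y, dtype=float)
    K = rbf_kernel(X, X, gamma)          # full kernel matrix on the training set
    selected, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(n_sv):
        # Pick the unselected kernel column most correlated with the residual
        scores = np.abs(K.T @ residual)
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        # Back-fit: least squares on the selected columns only
        Ks = K[:, selected]
        coef, *_ = np.linalg.lstsq(Ks, y, rcond=None)
        residual = y - Ks @ coef
    return np.array(selected), coef

# Illustrative usage on synthetic data (assumed, not from the paper)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] * X[:, 1])           # a nonlinear target
sv, alpha = sparse_lssvm_kmp(X, y, n_sv=15)
f = rbf_kernel(X, X[sv]) @ alpha         # predictions use only the selected SVs
print("support vectors:", len(sv), " training MSE:", np.mean((f - y) ** 2))
```

As the abstract describes, the paper instead maintains the QR factorization of the selected kernel columns incrementally so that each candidate support vector can be scored by its effect on the leave-one-out cross-validation error; the lstsq call above is only a stand-in for that machinery.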

Cited by 11 publications (2 citation statements)
References 20 publications
“…The improved algorithm only computes the conjugate gradient once. In addition, Xia used QR decomposition to train sparse LS-SVM models similar to SVM [11]. Sparse kernel matrix is easy to calculate and fast in training.…”
Section: Introduction (mentioning)
Confidence: 99%
“…It is formulated into a convex quadratic programming and can be solved efficiently. However, standard SVM, minimizing the hinge loss function and L 2 norm, only leads to sparsity for the dual variables, but not the primal variables [20,21,22]. To handle the big omics data problem with a lot of features, support vector machines with other penalties including L 1 and elastic net have been proposed for feature selection and prediction [23,24,25,26,27,28].…”
Section: Introduction (mentioning)
Confidence: 99%
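The contrast this statement draws (hinge loss with an L2 penalty gives sparsity only in the dual variables, whereas an L1 penalty drives primal weights to zero and so performs feature selection) can be seen with scikit-learn's LinearSVC. The synthetic data set and parameter values below are assumptions chosen for illustration and are not taken from the cited works.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic data with many uninformative features (illustrative assumption)
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# L2-penalized linear SVM: the primal weight vector is generally dense
svm_l2 = LinearSVC(penalty="l2", dual=False, C=1.0, max_iter=10000).fit(X, y)

# L1-penalized linear SVM: many primal weights are driven exactly to zero,
# which is what makes it usable for feature selection
svm_l1 = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=10000).fit(X, y)

print("nonzero weights, L2 penalty:", int(np.sum(svm_l2.coef_ != 0)))
print("nonzero weights, L1 penalty:", int(np.sum(svm_l1.coef_ != 0)))
```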