2013
DOI: 10.1155/2013/602341

A Novel Sparse Least Squares Support Vector Machines

Abstract: The solution of a Least Squares Support Vector Machine (LS-SVM) suffers from the problem of nonsparseness. The Forward Least Squares Approximation (FLSA) is a greedy approximation algorithm with a least-squares loss function. This paper proposes a new Support Vector Machine for which the FLSA is the training algorithm—the Forward Least Squares Approximation SVM (FLSA-SVM). A major novelty of this new FLSA-SVM is that the number of support vectors is the regularization parameter for tuning the tradeoff between …
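For context, the non-sparseness the abstract refers to follows from the standard LS-SVM formulation (the textbook form with the usual notation, not quoted from this paper): training reduces to solving one linear system that couples all n training points,

\[
\begin{bmatrix} 0 & \mathbf{1}^{\top} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix}
=
\begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix},
\qquad \Omega_{ij} = K(x_i, x_j),
\]

whose solution generically has every \(\alpha_i\) nonzero, so every training sample ends up a support vector. FLSA-SVM instead caps the number of nonzero coefficients directly, which is why that cap can act as the regularization parameter mentioned above.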

Cited by 11 publications (7 citation statements)
References 23 publications
“…In addition, in earlier studies the only indicator of classification quality was the reliability of the classification. It is desirable to evaluate other indicators as well, in particular the proportion of erroneous decisions on shot quality made from the acoustic-field parameters by a Least Squares Support Vector Machine (LSSVM) classifier [28,29]. This method requires a smaller training sample and offers higher performance.…”
Section: The Study Materials and Methods
confidence: 99%
“…As a result, the LSSVM method reduces the estimation time by almost an order of magnitude with the same classification-quality indicators. In addition, the LSSVM method makes it possible to reduce the volume of the training sample by a factor of 2-4 [41]. Moreover, a well-established and verified toolbox from the MATLAB system is available for this method [42].…”
Section: Procedures For Categorizing the Effectiveness Of A Single Shot From A Gun
confidence: 99%
“…An alternative route to improving the sparsity of LS-SVM is to apply the Kernel Matching Pursuit (KMP) algorithm. Since the KMP algorithm has been described in detail in [23,24], this paper revisits the idea of KMP from the perspective of QR decomposition.…”
Section: Kernel Matching Pursuit (KMP) For Sparsity Improvement
confidence: 99%
“…The Kernel Matching Pursuit (KMP) algorithm constructs its solution in a greedy manner: at each iteration it selects the sample that yields the maximal drop in the sum-of-squared-error loss [23,16]. Because the KMP algorithm allows direct control over the sparsity of the solution, it has been applied to ease the non-sparseness of the LS-SVM [24]. KMP is closely related to the orthogonal least squares (OLS) method from the field of nonlinear model identification [4].…”
Section: Introduction
confidence: 99%
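As a rough illustration of the greedy rule described in the statement above (at each iteration, pick the basis sample whose inclusion most reduces the squared-error loss), here is a minimal Python sketch; the function names, the RBF kernel choice, and the least-squares back-fitting step are illustrative assumptions, not the exact KMP algorithm of [23,16]:

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_forward_ls(K, y, n_sv):
    """Greedily select n_sv kernel columns (support vectors), each time
    taking the column that most reduces ||y - K[:, S] @ beta||^2."""
    selected, residual = [], y.astype(float).copy()
    beta = np.zeros(0)
    for _ in range(n_sv):
        best_j, best_drop = None, -np.inf
        for j in range(K.shape[1]):
            if j in selected:
                continue
            k = K[:, j]
            denom = k @ k
            if denom < 1e-12:
                continue
            # Drop in squared error from fitting the current residual
            # with column j alone (the matching-pursuit criterion).
            drop = (k @ residual) ** 2 / denom
            if drop > best_drop:
                best_drop, best_j = drop, j
        if best_j is None:
            break  # no informative column left
        selected.append(best_j)
        # Back-fit all selected columns by ordinary least squares.
        beta, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)
        residual = y - K[:, selected] @ beta
    return selected, beta

Given training inputs X and targets y, calling greedy_forward_ls(rbf_kernel(X, X), y, n_sv=10) returns the indices of ten selected support vectors and their coefficients; the cap n_sv plays the role the abstract assigns to the number of support vectors as a regularization parameter.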
“…To address the lack of sparseness in LS-SVM, a fixed-size LS-SVM was proposed to realize a sparse representation in the primal weight space [3]. Upon adding a bias term to the objective function, LS-SVM was solved through forward least squares approximation, generating a sparse solution in the least-squares sense [4]. Afterward, a sparse algorithm for least squares support vector regression was established based on the Householder transformation [5].…”
Section: Introduction
confidence: 99%