2014
DOI: 10.1016/j.eswa.2013.08.019
Quadratic optimization fine tuning for the Support Vector Machines learning phase

Cited by 4 publications (4 citation statements)
References 8 publications
“…Ding [19] proposed the FVI-SVM algorithm, which shrinks the training set using the KKT (Karush-Kuhn-Tucker) conditions and improves classification efficiency. Gonzalez-Mendoza [20] introduced the KKT optimality conditions and used them to present a strategy for implementing the SVM-QP (Support Vector Machines quadratic optimization problem). Among the methods studied, the BatchSVM incremental learning algorithm can continually accumulate support vectors, but as the data grow without bound it imposes too great a computational burden.…”
Section: Related Work and Problem Analysis 21 Related Workmentioning
confidence: 99%
“…Moreover, a real-time, continuous data stream poses a heavy computational burden. An incremental learning algorithm based on the KKT conditions [20] also reduces the number of incremental samples that take part in the next training step; this lowers classification accuracy because many incremental samples are filtered out at once.…”
Section: Problem Analysismentioning
confidence: 99%
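The KKT-based filtering described in these statements can be sketched as follows. For a trained soft-margin SVM with decision function f(x), a new sample (x, y) with implicit multiplier alpha = 0 satisfies the KKT conditions when y * f(x) >= 1; samples that violate this are the only ones that can alter the support vector set, so incremental schemes keep just those for retraining. This is a minimal sketch, not the cited algorithms: the linear decision function `w, b` and the helper names are hypothetical stand-ins for a trained model.

```python
import numpy as np

def violates_kkt(y, f_x, margin=1.0):
    # A sample outside the current model (alpha = 0) satisfies the
    # KKT conditions when y * f(x) >= margin; otherwise it violates them.
    return y * f_x < margin

def filter_increment(decision, X_new, y_new):
    # Keep only incremental samples that violate the KKT conditions of
    # the current SVM; these may change the support vector set.
    f = decision(X_new)
    mask = violates_kkt(y_new, f)
    return X_new[mask], y_new[mask]

# Hypothetical linear decision function standing in for a trained SVM.
w, b = np.array([1.0, -1.0]), 0.0
decision = lambda X: X @ w + b

X_new = np.array([[3.0, 0.0],    # y*f(x) = 3.0 -> satisfies KKT, filtered out
                  [0.2, 0.0],    # y*f(x) = 0.2 -> violates KKT, kept
                  [0.0, 2.0]])   # y = -1, f(x) = -2, y*f(x) = 2 -> filtered out
y_new = np.array([1.0, 1.0, -1.0])

X_keep, y_keep = filter_increment(decision, X_new, y_new)
```

As the citation statements note, this filtering trades accuracy for speed: samples that currently satisfy the KKT conditions are discarded even though later increments could have turned some of them into support vectors.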