2010
DOI: 10.1007/978-3-642-15822-3_1
Convergence Improvement of Active Set Training for Support Vector Regressors

Cited by 2 publications (3 citation statements) · References 11 publications
“…Note that we are able to take advantage of the specialized structure of Z and Y in (9) and (10) of the SVM-RSQP algorithm, which always maintains full rank. It should be noted, however, that pivoting can be implemented, even in nonsingular cases, to improve stability as the matrix becomes nearly singular.…”
Section: Methods
confidence: 99%
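The remark above is about numerical linear algebra rather than SVMs specifically: when a pivot element of a factorization becomes tiny, elimination without row exchanges amplifies rounding error. The following minimal sketch illustrates the general point with a hand-rolled no-pivot elimination against SciPy's partially pivoted LU; it is a generic numerical example, not the SVM-RSQP factorization itself, and the 2x2 system is an illustrative textbook case.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_no_pivot(A, b):
    """Gaussian elimination WITHOUT row pivoting (for illustration only)."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]      # multiplier blows up when the pivot is tiny
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):     # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# A 2x2 system with a tiny leading pivot: the matrix is well conditioned,
# yet elimination without pivoting loses all accuracy in the first unknown.
A = np.array([[1e-17, 1.0],
              [1.0,   1.0]])
b = A @ np.array([1.0, 1.0])           # exact solution is x = (1, 1)

x_naive = solve_no_pivot(A, b)
x_pivot = lu_solve(lu_factor(A), b)    # LU with partial (row) pivoting
print("no pivoting:  ", x_naive)       # -> [0. 1.]  (wrong)
print("with pivoting:", x_pivot)       # -> [1. 1.]
```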
“…In fact, the decomposition method sequential minimal optimization (SMO) [13]-[15] is a form of the active set method. The particular interest in the conventional active set method, as opposed to decomposition methods, stems from the following important characteristics: improved accuracy, which is especially important for regression applications [10], [3]; improved stability across a larger range of SVM parameters, such as the regularization parameter C, resulting in faster training when C is large [7], [9], [10]; incremental training [8], which enables searches over the entire regularization path [5]; and, finally, improved performance over the decomposition method when the fraction of variables that are bound and nonbound support vectors is relatively small [7], [10]. Overall, the active set method appears to be naturally suited to the SVM problem, since the Hessian is dense [3] and the solution is sparse [7] (the fraction of nonbound support vectors is expected to be small).…”
confidence: 99%
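The sparsity claim in that statement, that only a small fraction of training points end up as nonbound support vectors, is easy to check empirically. The sketch below uses scikit-learn's SMO-based SVR as a stand-in solver (not the active set trainer discussed in the paper); the dataset and the parameter values C and epsilon are illustrative assumptions. Support vectors whose dual coefficient magnitude sits at the box bound C are the bound support vectors; the rest are nonbound.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(400)   # noisy 1-D target

C = 10.0
model = SVR(kernel="rbf", C=C, epsilon=0.1).fit(X, y)

alphas = np.abs(model.dual_coef_.ravel())   # |alpha_i - alpha_i*| per support vector
n_bound = int(np.sum(np.isclose(alphas, C)))  # at the box bound -> bound SVs
n_nonbound = alphas.size - n_bound            # strictly inside -> nonbound SVs
print(f"training points: {X.shape[0]}")
print(f"support vectors: {alphas.size} ({n_bound} bound, {n_nonbound} nonbound)")
```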
“…An implementation of the primal active set method for SVM classification can be found in [34], and for regression in [22]. Yabuwaki [36] and Abe [1] improve the convergence of active-set training for the L2-SVM classifier and regressor, respectively. A full treatment of the singular reduced Hessian for various cases is provided in [26].…”
Section: Introduction
confidence: 99%
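To make the primal active-set idea referenced above concrete, here is a pedagogical sketch for a box-constrained QP, min 1/2 x'Qx + c'x subject to 0 <= x <= C, which is the shape of the (L2-)SVM dual once the equality constraint of the standard SVM/SVR dual is dropped for brevity. This is an assumption-laden toy, not the algorithms of [34], [22], [26], or the convergence improvements of Yabuwaki and Abe: it re-solves the reduced system from scratch instead of updating a factorization, clamps bound violations instead of doing a line search, and has no anti-cycling safeguard.

```python
import numpy as np

def box_qp_active_set(Q, c, C, max_iter=200, tol=1e-9):
    """Naive primal active-set loop for  min 1/2 x'Qx + c'x  s.t. 0 <= x <= C.
    L/U hold variables fixed at the lower/upper bound, F the free ones.
    Each pass solves the reduced system on F, then exchanges one variable."""
    n = len(c)
    L, U, F = set(range(n)), set(), set()
    x = np.zeros(n)
    for _ in range(max_iter):
        f, u = sorted(F), sorted(U)
        if f:
            # Reduced system on F; variables fixed at C contribute to the rhs.
            rhs = -(c[f] + C * Q[np.ix_(f, u)].sum(axis=1))
            x_f = np.linalg.solve(Q[np.ix_(f, f)], rhs)
            k = int(np.argmax(np.maximum(-x_f, x_f - C)))
            if x_f[k] < -tol or x_f[k] > C + tol:
                # Clamp the worst bound violator onto the bound it crossed.
                F.discard(f[k])
                (L if x_f[k] < 0 else U).add(f[k])
                continue
            x[f] = x_f
        x[sorted(L)], x[u] = 0.0, C
        g = Q @ x + c                  # gradient of the objective
        # KKT: g >= 0 on L, g <= 0 on U (g = 0 on F by construction).
        worst, j = tol, None
        for i in L:
            if -g[i] > worst:
                worst, j = -g[i], i
        for i in U:
            if g[i] > worst:
                worst, j = g[i], i
        if j is None:
            return x                   # all KKT conditions satisfied
        (L if j in L else U).discard(j)  # release the worst violator
        F.add(j)
    return x

# Toy problem: random positive definite Q (dense, like a kernel matrix).
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
Q = M @ M.T + 1e-3 * np.eye(8)
print(box_qp_active_set(Q, rng.standard_normal(8), C=1.0))
```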