Methods of Decreasing the Number of Support Vectors via k-Mean Clustering
2005 · DOI: 10.1007/11538059_75

Cited by 17 publications (14 citation statements) · References 10 publications
“…It is also interesting to compare these results to those of Chen et al. (2010), who compared the linear kernel, the polynomial kernel and the Gaussian kernel and concluded that no single kernel dominated the volatility predictions. The number of support vectors is important in SVM applications, as a well-performing SVM is expected to outline an entire dataset from only a small fraction of the input data (Xia et al., 2005). In general, for both in-sample and out-of-sample data, most NMSE and NMAE values present a decreasing trend, whereas DA increases from Table 1 to Table 3.…”
Section: Monte Carlo Experiments and Results Comparison
Confidence: 99%
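The point quoted above, that a well-performing SVM summarizes the whole dataset with a small fraction of its points, is easy to check on a fitted model. Below is a minimal sketch assuming scikit-learn (not from the cited papers; the dataset and parameters are illustrative):

```python
# Minimal sketch (illustrative, not from the cited papers): inspecting how
# many training points a fitted SVM retains as support vectors.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

# A well-performing SVM is expected to describe the dataset with a small
# fraction of the inputs; n_support_ holds per-class support vector counts.
n_sv = clf.n_support_.sum()
print(f"support vectors: {n_sv} of {len(X)} "
      f"({100.0 * n_sv / len(X):.1f}% of the training set)")
```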
“…Table 4 shows that the number of support vectors in the wavelet kernel-based SVM is lower than that in the Gaussian kernel-based SVM. The number of support vectors is important in SVM applications, as a well-performing SVM is expected to outline an entire dataset from only a small fraction of the input data (Xia et al., 2005). For the training data, fewer support vectors will lead to sparser datasets when solving the quadratic programming optimization problem.…”
[Table spill-over from the citing paper, support-vector counts per model: GARCH 271 267 231 201; GJR-GARCH 198 181 141 139; TS-GARCH 468 300 468 205; T-GARCH 174 346 146 305]
Section: Monte Carlo Experiments and Results Comparison
Confidence: 99%
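The kernel comparison quoted above can be illustrated in the same way. scikit-learn has no built-in wavelet kernel, so this sketch compares only the built-in kernels as a stand-in for the Gaussian-versus-wavelet comparison; the dataset and parameters are again assumptions:

```python
# Minimal sketch, assuming scikit-learn: comparing support-vector counts
# across kernels. The built-in kernels stand in for the wavelet kernel,
# which scikit-learn does not provide.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, gamma="scale").fit(X, y)
    print(f"{kernel:>6}: {clf.n_support_.sum()} support vectors")
```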
“…Combining k-means and SVM has been studied before, but not in the context of NALM. Recognizing that in a majority of cases a large portion of the input data is redundant for training, [28] uses k-means to decrease the number of support vectors and the training-set size. Similarly, [12] and [13] employ k-means to select a subset of the original data for SVM training.…”
Section: B. Classification
Confidence: 99%
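The reduction scheme described in this quotation (cluster the data with k-means, then train the SVM on a smaller, more representative set) can be sketched as follows. This is a minimal illustration of the general idea, assuming scikit-learn; the cluster count and the per-class clustering are assumptions for the sketch, not the exact procedure of [28], [12], or [13]:

```python
# Minimal sketch: shrink the SVM training set by replacing each class with
# its k-means centroids, in the spirit of the k-means-based reduction
# schemes discussed above. The cluster count k is a hypothetical choice.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
k = 50  # centroids kept per class (illustrative)

# Cluster each class separately so centroids inherit an unambiguous label.
centers, labels = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[y == cls])
    centers.append(km.cluster_centers_)
    labels.append(np.full(k, cls))

X_red, y_red = np.vstack(centers), np.concatenate(labels)

# Train on the reduced set: fewer, more representative points typically
# yield a cheaper QP and fewer support vectors.
clf = SVC(kernel="rbf", gamma="scale").fit(X_red, y_red)
print(f"reduced training set: {len(X_red)} points, "
      f"{clf.n_support_.sum()} support vectors")
```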
“…From inequality (1), it can be deduced that a small number of support vectors will generate a small testing error and also leads to better generalization capability in SVM [9]. Successful use of k-means requires a carefully selected distance measure that reflects the properties of the clustering task. Designing the distance measure by hand is a difficult job.…”
Section: E[Pr(error)] ≤ E[number of support vectors] / number of training samples
Confidence: 99%
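Written out, the inequality named in this section title is the classical leave-one-out bound, exactly as it appears in the garbled original heading; in LaTeX:

```latex
% Leave-one-out bound: the expected test error of an SVM is bounded by
% the expected fraction of training points that become support vectors
% (both expectations taken over training sets).
\[
  E\bigl[\Pr(\mathrm{error})\bigr]
  \;\le\;
  \frac{E\bigl[\text{number of support vectors}\bigr]}
       {\text{number of training samples}}
\]
```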