2006
DOI: 10.1007/11875604_70

Improving SVM-Linear Predictions Using CART for Example Selection

Abstract: This paper describes a study on example selection in regression problems using µ-SVM (Support Vector Machine) linear as the prediction algorithm. The motivating case is a study on real data for the problem of bus trip time prediction. In this study we use three different training sets: all the examples; examples from past days similar to the day for which the prediction is needed; and examples selected by a CART regression tree. We then verify whether the CART-based example selection approach is appropriate on …
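The CART-based example selection the abstract describes can be sketched as follows. This is an illustrative assumption, not the paper's code: the synthetic data, the scikit-learn estimators, and the specific leaf-node strategy (train a linear SVM only on the examples falling in the same CART leaf as the query point) are stand-ins for whatever the authors actually implemented.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import LinearSVR

# Synthetic regression data standing in for the bus trip time records.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(0, 0.1, 200)

# Fit a CART regression tree on the full training set once.
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X, y)

def predict_with_leaf_selection(x_query):
    """Select only the training examples that fall in the same CART
    leaf as the query point, train a linear SVM on them, and predict."""
    leaf = tree.apply(x_query.reshape(1, -1))[0]
    mask = tree.apply(X) == leaf
    svm = LinearSVR(C=1.0, max_iter=10_000)
    svm.fit(X[mask], y[mask])
    return svm.predict(x_query.reshape(1, -1))[0]

pred = predict_with_leaf_selection(np.array([5.0, 3.0]))
```

The design idea is that the tree partitions the input space into locally homogeneous regions, so a simple linear model trained only within the query's region can outperform one trained on all examples.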

Cited by 9 publications (4 citation statements)
References 8 publications
“…As previously mentioned, the leaf node approach is not problem dependent and, consequently, there is an open question: is the leaf node approach promising in other domains? The answer to this question is given in [38]. The leaf node approach is tested in eleven regression data sets [49] just for SVM-linear.…”
Section: Results on Example Selection
Confidence: 99%
“…) comprise the pair of input and output vectors, w is the weight vector, b is the threshold value, C is the penalty parameter, the kernel function Φ maps input samples to a higher-dimensional space, ε is the epsilon parameter, and the upper and lower training errors are ξ_i and ξ_i*, respectively. For the kernel function, there are three typical choices: the RBF, polynomial, and linear functions [22][23][24]. These kernel functions are used to map nonlinearly between the input and feature space.…”
Section: Support Vector Regression
Confidence: 99%
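The three kernels named in the excerpt above can be compared directly. This is a minimal sketch using scikit-learn's `SVR`; the dataset and hyperparameters (C, ε) are illustrative assumptions, chosen only to show how kernel choice affects fit on a nonlinear target.

```python
import numpy as np
from sklearn.svm import SVR

# A nonlinear 1-D target: y = sin(x) with small Gaussian noise.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.05, 150)

# Fit epsilon-SVR with each of the three typical kernels.
scores = {}
for kernel in ("rbf", "poly", "linear"):
    model = SVR(kernel=kernel, C=1.0, epsilon=0.1)
    model.fit(X, y)
    scores[kernel] = model.score(X, y)  # R^2 on the training data

for kernel, r2 in scores.items():
    print(f"{kernel:7s} R^2 = {r2:.3f}")
```

On a target like sin(x), the RBF kernel should fit well while the linear kernel cannot capture the curvature, which is exactly why the kernel is chosen to match the suspected nonlinearity of the input-output mapping.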
“…Some of the first research in this area used neural networks to predict the Tokyo stock market (Mizuno, Kosaka, Yajima, Komoda, 1998; Kimoto, Asakawa, Yoda, Takeoka, 1990). Some other similar studies have used the Bayes classifier (Pop, 2006; Shin, Kil, 1998; Tsaih, Hsu, Lai, 1998) and support vector machines (Ince, Trafalis, 2007; Moreira, Jorge, Soares, Sousa, 2006). Also, many studies compare the performance of different methods such as neural networks, support vector machines, k-nearest neighbours, naïve Bayes classifier, genetic algorithms, decision trees etc.…”
Section: Introduction
Confidence: 99%