2002
DOI: 10.1162/089976602760128081
Training v-Support Vector Regression: Theory and Algorithms

Abstract: We discuss the relation between epsilon-support vector regression (epsilon-SVR) and nu-support vector regression (nu-SVR). In particular, we focus on properties that are different from those of C-support vector classification (C-SVC) and nu-support vector classification (nu-SVC). We then discuss some issues that do not occur in the case of classification: the possible range of epsilon and the scaling of target values. A practical decomposition method for nu-SVR is implemented, and computational experiments are…
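The contrast the abstract draws between epsilon-SVR and nu-SVR can be illustrated with a minimal sketch. The example below is not from the paper; it assumes scikit-learn (whose `SVR` and `NuSVR` classes wrap LIBSVM) and synthetic data. In epsilon-SVR the tube width epsilon is fixed in advance, while in nu-SVR the parameter nu bounds the fraction of support vectors from below (and the fraction of margin errors from above), with epsilon determined by the optimization.

```python
# Hedged sketch: epsilon-SVR vs. nu-SVR on synthetic data (scikit-learn / LIBSVM).
import numpy as np
from sklearn.svm import SVR, NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# epsilon-SVR: the insensitivity-tube width epsilon is chosen a priori.
eps_svr = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X, y)

# nu-SVR: nu in (0, 1] is a lower bound on the fraction of support
# vectors; the tube width epsilon falls out of the optimization.
nu_svr = NuSVR(kernel="rbf", C=1.0, nu=0.5).fit(X, y)

frac_sv = len(nu_svr.support_) / len(X)
print(f"nu-SVR support-vector fraction: {frac_sv:.2f}")
```

With `nu=0.5`, at least about half of the training points end up as support vectors, which is the "more meaningful interpretation" of the parameter that several citing works point to.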

Cited by 282 publications (144 citation statements)
References 11 publications
“…ν-SVR is used in this study since it requires less tuning and fewer parameters than ε-SVR. It also automatically minimizes the loss function and has been shown to support more meaningful data interpretation [41,42]; this premise is validated by results from this research.…”
Section: Theory (supporting)
confidence: 62%
“…In such a case there is no need to map the data to a higher-dimensional space, as the non-linear mapping does not improve the model performance while significantly increasing the requirements for computing power and time [47]. This assumption was confirmed by Bray and Han, who tested the performance of different kernels in the LIBSVM toolkit for runoff modeling [48] and identified nu-SVR with the linear kernel as the optimal configuration in terms of learning capabilities.…”
Section: Model Setup and Learning (mentioning)
confidence: 83%
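The linear-kernel configuration the statement above describes can be sketched as follows. This is an illustration under assumed synthetic data, again using scikit-learn's `NuSVR` (a LIBSVM wrapper), not the cited study's actual setup: when the input-output relation is already near-linear, `kernel="linear"` skips the implicit high-dimensional mapping and its computational cost.

```python
# Hedged sketch: nu-SVR with a linear kernel on a near-linear synthetic problem.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
w = np.array([2.0, -1.0, 0.5])          # assumed "true" linear weights
y = X @ w + 0.05 * rng.standard_normal(100)

# Linear kernel: no mapping to a higher-dimensional feature space,
# so training and prediction stay cheap on near-linear data.
model = NuSVR(kernel="linear", C=1.0, nu=0.5).fit(X, y)
r2 = model.score(X, y)                  # coefficient of determination
print(f"training R^2: {r2:.3f}")
```

On data like this the linear kernel fits essentially as well as an RBF kernel would, at a fraction of the cost, which matches the rationale attributed to Bray and Han.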
“…The two variants ε-SVR [8] and ν-SVR [9], [10] were evaluated. The free model parameters ε and ν for the respective algorithms, and C as the regularization parameter in both approaches, were chosen experimentally.…”
Section: Applied Machine Learning Methods (mentioning)
confidence: 99%