2008
DOI: 10.1016/j.ins.2008.05.016
Support vector regression from simulation data and few experimental samples

Abstract: This paper considers nonlinear modeling based on a limited amount of experimental data and a simulator built from prior knowledge. The problem of how to best incorporate the data provided by the simulator, possibly biased, into the learning of the model is addressed. This problem, although particular, is very representative of numerous situations met in engine control, and more generally in engineering, where complex models, more or less accurate, exist and where the experimental data which can …

Cited by 49 publications (19 citation statements); references 18 publications.
“…Following this, Yip [60] compared the predictive performance of CBR with that of MDA, finding CBR clearly outperformed MDA. Support vector machine (SVM) has been gaining popularity recently because of its high generalization performance and global optimal solution [3,7,15,16,22,27,31,54,61]. Studies of SVM-based BFP [4,9,12,14,35,36,45,58] indicate that SVM outperforms BPNN, MDA, and Logit.…”
Section: The Problem Addressed In This Research
confidence: 99%
“…In Table 1, we present experimental comparisons for regression estimation using three representative loss functions: squared loss, Huber's loss (ε=0), and Vapnik's ε-insensitive loss with ε given according to Eq. (8). The noise level (ζ) column indicates the standard deviation of the Gaussian noise with zero mean.…”
Section: A Comparison With Three Loss Functions
confidence: 99%
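The three loss functions compared above can be stated concretely. As a minimal sketch (the parameter values δ = 1.0 and ε = 0.1 below are illustrative defaults, not the settings used in the cited experiments, where ε was set according to the paper's Eq. (8)):

```python
import numpy as np

def squared_loss(r):
    # Classical least-squares loss: penalizes residuals quadratically.
    return r ** 2

def huber_loss(r, delta=1.0):
    # Huber's loss: quadratic near zero, linear in the tails (robust to outliers).
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def eps_insensitive_loss(r, eps=0.1):
    # Vapnik's eps-insensitive loss: zero inside the eps-tube, linear outside.
    return np.maximum(np.abs(r) - eps, 0.0)

residuals = np.array([-2.0, -0.05, 0.0, 0.3, 1.5])
print(squared_loss(residuals))
print(huber_loss(residuals))
print(eps_insensitive_loss(residuals))
```

The key contrast visible here is that small residuals inside the ε-tube incur zero cost under the ε-insensitive loss, which is what yields the sparse support-vector solution in SVR.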
“…Therefore, designing a regression approach that performs well with small samples is a significant problem. Support vector regression (SVR) is motivated by the growing popularity of support vector machines (SVM) for regression with small samples (Smola & Scholkopf, 2004; Chu & Keerthi, 2007; Bloch, 2008; Huang, Zheng, et al, 2009). However, the quality of SVR models depends on proper settings of the SVR hyperparameters, and the main issue for practitioners trying to apply SVR is determining these parameter values for a given data set.…”
Section: Introduction
confidence: 99%
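The hyperparameter-selection issue raised in that statement is commonly handled by cross-validated search over C, ε, and the kernel width. A minimal sketch using scikit-learn's `SVR` and `GridSearchCV` (the synthetic sinc data and the grid values are illustrative assumptions, not from the cited works):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Small synthetic sample, mimicking the small-sample regression setting.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sinc(X).ravel() + rng.normal(0.0, 0.1, size=40)

# Cross-validated search over the main SVR hyperparameters:
# C (regularization), epsilon (tube width), gamma (RBF kernel width).
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "epsilon": [0.01, 0.1], "gamma": [0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

With only 40 samples the choice of grid and the number of folds both matter; this is exactly the sensitivity the quoted passage points to.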
“…The SVR model also has been improved by prior knowledge [11,12]. There are numerous types of prior knowledge, including the average value and monotonicity of the sample data.…”
Section: Introduction
confidence: 99%