2015
DOI: 10.1016/j.simpat.2015.08.002

An adaptive global variable fidelity metamodeling strategy using a support vector regression based scaling function

Cited by 75 publications (17 citation statements)
References 45 publications
“…If the ε-insensitive loss function is adopted, the aim of SVR is to find an f(x) that keeps the difference between the true values and the training values below the given error ε. The solution f(x) can thus be expressed as the following quadratic programming problem (Zhou et al.):

\min_{\omega,\, b,\, \xi_i,\, \xi_i^*} \ \Phi(\omega, \xi_i, \xi_i^*) = \frac{1}{2}\|\omega\|^2 + c \sum_{i=1}^{n} (\xi_i + \xi_i^*)

subject to

\begin{cases} y_i - (\omega^T \varphi(x_i) + b) \le \varepsilon + \xi_i \\ (\omega^T \varphi(x_i) + b) - y_i \le \varepsilon + \xi_i^* \\ \xi_i,\ \xi_i^* \ge 0 \end{cases}

where \xi_i and \xi_i^* are slack variables that bound the upper and lower excess deviations, \frac{1}{2}\|\omega\|^2 is the regularization term, c is the error penalty factor used to balance the regularization term against the empirical risk, and ε is the loss tolerance that determines the accuracy required of the training data points.…”
Section: Methods
confidence: 99%
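As a concrete illustration of this formulation (a minimal sketch, not taken from the cited papers): in scikit-learn's SVR, the parameter C plays the role of the penalty factor c and epsilon the tolerance ε of the insensitive loss. The data below is synthetic.

# Minimal sketch of epsilon-insensitive SVR using scikit-learn.
# C corresponds to the penalty factor c, epsilon to the loss
# tolerance; the data is synthetic, not from the cited work.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=100)

# The RBF kernel stands in for the feature map phi(x).
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)

# Training deviations smaller than epsilon incur no loss; larger
# ones are absorbed by the slacks xi_i, xi_i* and penalized via C.
print(model.predict([[0.5]]))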
“…[144] It has also been shown within numerical analysis that adaptive sampling methods yield superior surrogate approximation and lower computational expense compared to static techniques [144]. Researchers have reported adaptive sampling techniques for different surrogate models, such as support vector machines [145,146], artificial neural networks [147], and others [148-150]. These are adaptive sampling techniques in which sampling points are placed systematically, yet still stochastically.…”
Section: Algorithms
confidence: 99%
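A hedged sketch of the adaptive idea described in that passage (illustrative only: the committee-based disagreement proxy, the fixed budget, and the expensive_fn placeholder are assumptions, not the cited papers' methods). The loop fits a small committee of surrogates on bootstrap resamples and adds the candidate point where they disagree most.

# Sketch: adaptive sampling for a surrogate (query-by-committee flavor).
# expensive_fn is a hypothetical stand-in for a costly simulation.
import numpy as np
from sklearn.svm import SVR

def expensive_fn(x):                      # placeholder for the high-fidelity model
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(8, 1))       # small initial design
y = expensive_fn(X).ravel()
candidates = np.linspace(-2, 2, 200).reshape(-1, 1)

for _ in range(10):                       # fixed budget of new samples
    # Committee of surrogates fit on bootstrap resamples of the data.
    preds = []
    for _ in range(5):
        idx = rng.integers(0, len(X), len(X))
        m = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[idx], y[idx])
        preds.append(m.predict(candidates))
    spread = np.std(preds, axis=0)
    # Sample where the committee disagrees most (proxy for surrogate error).
    x_new = candidates[[np.argmax(spread)]]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_fn(x_new).ravel())

print(f"final design size: {len(X)}")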
“…Therefore, many supervised learning algorithms can be effectively used to develop simulation metamodels. These include neural networks (Alam, McNaught, & Ringrose, 2004; Can & Heavey, 2012; Kuo, Yang, Peters, & Chang, 2007; Xanthopoulos & Koulouriotis, 2018), kriging/Gaussian process (Dancik, Jones, & Dorman, 2010; Dosi, Pereira, & Virgillito, 2018; Kleijnen, 2009; Salle & Yıldızoğlu, 2014), SVR (Clarke, Griebsch, & Simpson, 2005; Edali & Yücel, 2018; Fonoberova, Fonoberov, & Mezić, 2013; Zhou, Shao, Jiang, Zhou, & Shu, 2015), RFs (Edali & Yücel, 2019; Villa‐Vialaneix, Follador, Ratto, & Leip, 2012), multivariate adaptive regression splines (Bozağaç, Batmaz, & Oğuztüzün, 2016; Friedman, 1991), radial basis functions (Hussain, Barton, & Joshi, 2002; Jakobsson, Patriksson, Rudholm, & Wojciechowski, 2010; Mullur & Messac, 2006) and first‐ and second‐order linear regression models (Durieux & Pierreval, 2004; Grow, 2017; Happe, Kellermann, & Balmann, 2006; Kleijnen & Deflandre, 2006). There are several studies in the literature which compare different subsets of these techniques based on different criteria such as accuracy, robustness, interpretability and efficiency (i.e., runtime) (Clarke, Griebsch, & Simpson, 2005; Li, Ng, Xie, & Goh, 2010; Østergård, Jensen, & Maagaard, 2018; Van Gelder, Das, Janssen, & Roels, 2014; Villa‐Vialaneix, Follador, Ratto, & Leip, 2012).…”
Section: Proposed Approach
confidence: 99%
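A brief sketch of how such metamodel comparisons are commonly set up (illustrative; the synthetic data and the particular model and scoring choices are assumptions, not drawn from the studies cited above): each candidate metamodel is scored by cross-validated RMSE on the same design.

# Sketch: cross-validated comparison of candidate metamodels on one design.
# Synthetic data stands in for simulation input/output.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(150, 3))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.05, size=150)

models = {
    "SVR": SVR(kernel="rbf", C=10.0, epsilon=0.05),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Gaussian process": GaussianProcessRegressor(),
    "linear regression": LinearRegression(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.4f}")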