2012
DOI: 10.3233/jcm-2012-0402

Enhancing infill sampling criteria for surrogate-based constrained optimization

Abstract: A popular approach to handling constraints in surrogate-based optimization is to add penalty functions to an infill sampling criterion that seeks objective improvement. Typical sampling metrics, such as expected improvement, tend to have multimodal landscapes and can be difficult to search. When the problem is transformed using a penalty approach, the search landscape becomes riddled with cliffs, which further increases its complexity. Here we avoid searching this aggregated space by trea…
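
The penalty-augmented criterion the abstract describes can be illustrated with a minimal sketch: expected improvement computed from a Gaussian-process surrogate, minus a weighted sum of predicted constraint violations. This is a hedged illustration of the aggregated approach being critiqued, not the paper's own method; the function names and the penalty weight rho are assumptions introduced here. With a large rho, the aggregated surface drops sharply wherever a predicted constraint turns positive, producing the cliff-ridden landscape the abstract warns about.

import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # Standard EI for minimization, given the surrogate's predictive mean
    # `mu` and standard deviation `sigma` at a candidate point.
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def penalized_ei(mu, sigma, f_best, g_pred, rho=100.0):
    # EI minus a penalty on predicted violations of constraints g(x) <= 0.
    # `g_pred` holds surrogate predictions for each constraint; `rho` is an
    # illustrative penalty weight.
    violation = np.sum(np.maximum(np.asarray(g_pred), 0.0))
    return expected_improvement(mu, sigma, f_best) - rho * violation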

Cited by 10 publications (3 citation statements) · References: 46 publications
“…For example, the evaluation of the constraint functions may be time-consuming, and consequently, surrogates need to be built for constraints [29], [45], [46]. Since constraints can be handled using penalty functions [47], surrogates are built to approximate the penalty function instead of the individual constraint functions [48], [49]. As whether a candidate solution is feasible or not can be seen as a classification problem, support vector machine [50], [51], k-nearest neighbors algorithm [52], and linear hyper-plane estimator [53] have been employed to distinguish feasible solutions from infeasible ones.…”
Section: Introduction
mentioning
confidence: 99%
“…[18] Other works have utilized governing properties of the surrogate model, such as Gaussian process models, to impose an expected improvement metric with added constraints on the sampling. [19] Yet, with respect to surrogate-model generation for process-system applications, simple box constraints are usually used to confine the sampling domain. [2,6,20] That is, for each independent variable x_i, simple lower and upper bounds are imposed, …”
mentioning
confidence: 99%
“…10, can be selected to enrich the database. More specifically, following the suggestions by Parr et al. [31], the individual maximizing the product of both quantities is promoted, ensuring the exploration of the design space with configurations fulfilling the constraints.…”
[Figure caption: configurations of the spoiler belonging to the Pareto front during the inner-loop optimization process; the red dot corresponds to the point of the front maximizing EI × PF.]
mentioning
confidence: 99%
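
The EI × PF product referred to in the last statement (expected improvement multiplied by the probability of feasibility) can be sketched as follows, assuming independent Gaussian-process surrogates for the objective and for each constraint g_j(x) <= 0. Names such as probability_of_feasibility and ei_times_pf are illustrative and not taken from the cited works.

import numpy as np
from scipy.stats import norm

def probability_of_feasibility(g_mu, g_sigma):
    # Product of per-constraint probabilities P(g_j(x) <= 0), assuming each
    # constraint prediction is Gaussian and independent of the others.
    g_mu = np.asarray(g_mu, dtype=float)
    g_sigma = np.maximum(np.asarray(g_sigma, dtype=float), 1e-12)
    return float(np.prod(norm.cdf(-g_mu / g_sigma)))

def ei_times_pf(mu, sigma, f_best, g_mu, g_sigma):
    # Infill criterion: expected improvement times probability of feasibility,
    # maximized to pick the next design to evaluate.
    sigma = max(sigma, 1e-12)
    z = (f_best - mu) / sigma
    ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei * probability_of_feasibility(g_mu, g_sigma)

# Example: a candidate with predicted objective mean 1.2, std 0.3, current
# best 1.0, and two constraints predicted at (-0.5, 0.1) with std 0.2 each.
score = ei_times_pf(1.2, 0.3, 1.0, [-0.5, 0.1], [0.2, 0.2])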