2018
DOI: 10.1080/0305215x.2018.1461853

A new Kriging–Bat Algorithm for solving computationally expensive black-box global optimization problems

Cited by 12 publications (3 citation statements)
References 35 publications
“…Creating three optimization objectives. The prediction parameters $\hat{f}$, $\hat{g}_i$ and the RMSE related to the objective and constraints can be calculated by the kriging model (see Equations (8) and (9)). The three optimization objectives in Equation (16) are then formed from these prediction parameters.…”
Section: The KMCGO Method (mentioning)
confidence: 99%
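For context on the excerpt above: a kriging model fitted to a handful of expensive evaluations returns both a mean prediction and an RMSE-style uncertainty at any candidate point, and these are the quantities the excerpt combines into its optimization objectives. A minimal sketch below uses scikit-learn's GaussianProcessRegressor as a stand-in kriging implementation (an assumption; the cited paper's Equations (8), (9) and (16) are not reproduced here, and the sample data are toy values):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# A few expensive evaluations of a black-box objective (toy stand-in).
X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.sin(3.0 * X).ravel()

# Fit the kriging (Gaussian-process) surrogate.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# At a candidate point the surrogate yields both a prediction and an
# uncertainty estimate -- the two ingredients the excerpt refers to.
x_cand = np.array([[0.5]])
mu, std = gp.predict(x_cand, return_std=True)
print(f"prediction {mu[0]:.3f}, RMSE estimate {std[0]:.3f}")
```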
“…The kriging-based EGO (efficient global optimization) method [7] and its extensions [8][9][10][11][12][13] have solved some optimization problems with expensive black-box objectives or constraints. Most of them are single-objective optimization approaches, which can obtain only one sampling point per infill search loop.…”
Section: Introduction (mentioning)
confidence: 99%
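To illustrate the "one sampling point per infill search loop" behaviour the excerpt describes, here is a hedged sketch of an EGO-style expected-improvement step. This is the generic textbook formulation, not the exact procedure of references [7]-[13]; the objective and candidate grid are toy assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, f_min):
    # EI for minimization: large where the surrogate predicts low values
    # or is highly uncertain.
    mu, std = gp.predict(X_cand, return_std=True)
    std = np.maximum(std, 1e-12)              # guard against zero variance
    z = (f_min - mu) / std
    return (f_min - mu) * norm.cdf(z) + std * norm.pdf(z)

# One infill search loop: exactly one new sampling point is chosen,
# evaluated with the expensive objective, and added to the surrogate data.
objective = lambda x: np.sin(3.0 * x).ravel()      # toy stand-in
X = np.array([[0.1], [0.4], [0.9]])
y = objective(X)
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

X_cand = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
x_new = X_cand[np.argmax(expected_improvement(gp, X_cand, y.min()))]
y_new = objective(x_new.reshape(1, -1))            # single expensive evaluation
gp.fit(np.vstack([X, x_new]), np.append(y, y_new))
```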
“…Four strategies were also proposed in (Bahmani-Firouzi et al. 2014) for updating the bat velocities, in which an accumulator for each strategy is computed and used to determine the probability of selecting that strategy. (Saad et al. 2018) recently utilized a kriging surrogate model to solve computationally expensive black-box optimization problems. Despite these several efforts, the possibilities for enhancement remain open, as no algorithm is ultimately perfect, as indicated by the no-free-lunch theorem (Ho, Y.…”
Section: Introduction (mentioning)
confidence: 99%
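For reference, the bat-velocity update the excerpt refers to follows the standard bat algorithm move (frequency draw, velocity pull toward the current best, position step). A minimal sketch follows; the frequency bounds are assumed illustrative values, and the four accumulator-based selection strategies of Bahmani-Firouzi et al. 2014 are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
F_MIN, F_MAX = 0.0, 2.0        # frequency bounds (assumed illustrative values)

def bat_update(x, v, x_best):
    # Standard bat move: draw a frequency, pull the velocity toward the
    # current best solution, then advance the position.
    beta = rng.random(x.shape)                 # beta ~ U(0, 1)
    freq = F_MIN + (F_MAX - F_MIN) * beta      # f_i = f_min + (f_max - f_min)*beta
    v = v + (x - x_best) * freq                # v_i = v_i + (x_i - x*)*f_i
    return x + v, v                            # x_i = x_i + v_i

# One update for a five-dimensional bat.
x, v, x_best = rng.random(5), np.zeros(5), rng.random(5)
x, v = bat_update(x, v, x_best)
```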