2020
DOI: 10.1007/s12652-020-02153-1

A better exploration strategy in Grey Wolf Optimizer

Cited by 58 publications (25 citation statements) · References 27 publications
“…In [40], Singh and Bansal proposed a new grey wolf optimizer combined with crossover and opposition-based learning, named GWO-XOBL, to overcome the inadequate diversity of wolves that traps the search in local optima and degrades GWO performance. The new algorithm was evaluated on 13 well-known standard benchmark problems and showed a marked performance improvement over GWO and other algorithms. In [41], Bansal and Singh addressed two issues of the GWO algorithm, low exploration and a slow convergence rate, using an explorative equation and opposition-based learning (OBL); 23 standard benchmark test problems were used to validate the proposed enhancement, and the results show better effectiveness than other metaheuristic algorithms. In [42], a modified GWO (MGWO) was proposed to schedule jobs on virtual machines and enhance performance.…”
Section: Literature Review (mentioning)
confidence: 99%
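A minimal sketch of the opposition-based learning (OBL) step referenced in [40] and [41]: each candidate x in [lower, upper] is mirrored to lower + upper − x, and the fitter of the pair is kept. The function name, the sphere test function, and the population shape are illustrative assumptions, not the exact procedure of either paper.

```python
import numpy as np

def opposite_population(population, lower, upper):
    """Opposition-based learning: the opposite of x in [lower, upper]
    is lower + upper - x, computed per dimension."""
    return lower + upper - population

# Illustrative use: keep the fitter of each candidate and its opposite.
rng = np.random.default_rng(0)
lower, upper = -100.0, 100.0
pop = rng.uniform(lower, upper, size=(10, 5))        # 10 wolves, 5 dimensions
opp = opposite_population(pop, lower, upper)
sphere = lambda X: np.sum(X**2, axis=1)              # sample benchmark (sphere)
keep = np.where(sphere(pop)[:, None] <= sphere(opp)[:, None], pop, opp)
```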
“…The grey wolf optimizer [41][42][43][44][45][46][47][48][49] is a recent metaheuristic inspired by grey wolves; it models the leadership hierarchy and the hunting process of the grey wolf. The leadership hierarchy is built on four types of wolves, namely alpha (α), beta (β), delta (δ), and omega (ω), in a pack of 5-12 wolves on average.…”
Section: Background Work (mentioning)
confidence: 99%
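The encircling and hunting behaviour summarised above corresponds to the standard GWO position update, in which the three best wolves (α, β, δ) guide the rest of the pack while the coefficient a decays from 2 to 0. The sketch below follows that standard formulation; the population size, iteration count, and sphere test function are illustrative assumptions rather than settings taken from the cited works.

```python
import numpy as np

def gwo(obj, dim, lb, ub, n_wolves=30, iters=200, seed=0):
    """Standard grey wolf optimizer: alpha, beta and delta (the three best
    wolves) guide the pack; coefficient a decays linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(obj, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters                      # exploration -> exploitation
        new_X = np.empty_like(X)
        for i, x in enumerate(X):
            guides = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a    # A = 2a*r1 - a
                C = 2 * rng.random(dim)            # C = 2*r2
                D = np.abs(C * leader - x)         # distance to the leader
                guides.append(leader - A * D)      # candidate guided by leader
            new_X[i] = np.mean(guides, axis=0)     # average of the three guides
        X = np.clip(new_X, lb, ub)
    best = X[np.argmin(np.apply_along_axis(obj, 1, X))]
    return best, obj(best)

# Example run on the sphere function (illustrative choice of benchmark).
best, val = gwo(lambda x: np.sum(x**2), dim=5, lb=-100.0, ub=100.0)
```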
“…There are four types of grey wolves, named α, β, δ, and ω, used to mimic the leadership hierarchy [103]. All grey wolves in a pack strictly follow the social hierarchy, whose rank decreases from the α down to the ω search agents [104]. The top level in the grey wolf ladder is α.…”
Section: Meta-heuristic Optimization Algorithms (mentioning)
confidence: 99%
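As a small illustration of how this ranking is typically realised in code (a generic sketch, not code from [103] or [104]): the pack is sorted by fitness, the three fittest wolves take the α, β and δ roles, and the remaining wolves act as ω.

```python
import numpy as np

# Sort the pack by fitness; the three best become alpha, beta, delta,
# and every remaining wolf is treated as an omega search agent.
rng = np.random.default_rng(1)
pack = rng.uniform(-100, 100, size=(8, 5))       # 8 wolves, 5 dimensions
fitness = np.sum(pack**2, axis=1)                # sample benchmark (sphere)
ranked = pack[np.argsort(fitness)]
alpha, beta, delta, omegas = ranked[0], ranked[1], ranked[2], ranked[3:]
```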