2020
DOI: 10.1155/2020/4968063

An Enhanced Comprehensive Learning Particle Swarm Optimizer with the Elite-Based Dominance Scheme

Abstract: In recent years, swarm-based stochastic optimizers have achieved remarkable results in tackling real-life problems in engineering and data science. Within particle swarm optimization (PSO), the comprehensive learning PSO (CLPSO) is a well-established evolutionary algorithm that introduces a comprehensive learning strategy (CLS), which effectively boosts the efficacy of the PSO. However, when a unimodal function is processed, the convergence speed of the algorithm is too slow to converge qui…
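The comprehensive learning strategy the abstract refers to lets each dimension of a particle learn from a potentially different particle's personal best, rather than from a single global best. A minimal sketch of that per-dimension exemplar selection is below; the function name, parameters, and the two-particle tournament details are illustrative assumptions, not the paper's exact implementation (for instance, CLPSO additionally forces at least one dimension to learn from another particle, which is omitted here for brevity):

```python
import random

def choose_exemplars(i, pbest_fitness, pc, n_dims, rng=random):
    """CLPSO-style exemplar selection for particle i (minimization).

    For each dimension: with learning probability pc, hold a tournament
    between two other randomly chosen particles and learn from the one
    with the better personal-best fitness; otherwise learn from particle
    i's own personal best.
    """
    n = len(pbest_fitness)
    exemplars = []
    for _ in range(n_dims):
        if rng.random() < pc and n > 2:
            # Tournament between two distinct particles other than i.
            a, b = rng.sample([j for j in range(n) if j != i], 2)
            exemplars.append(a if pbest_fitness[a] <= pbest_fitness[b] else b)
        else:
            exemplars.append(i)
    return exemplars
```

The returned index list is then used in the velocity update, so each dimension is pulled toward its own exemplar's personal best. Setting `pc = 0` recovers learning purely from the particle's own personal best.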

Cited by 20 publications (13 citation statements)
References 114 publications
“…We initially present the comparison of proposed algorithm, EDCQPSO, with others through 30 classic benchmark functions in IEEE CEC2017 [46], as shown in Table 1. The performance of our algorithm on benchmark functions was verified.…”
Section: Experimental Setup and Performance Analysis
confidence: 99%
“…The comparison of mean values and standard deviations after thirty runs on thirty benchmark functions is listed. Table 4 shows that EDCQPSO ranks first, followed sequentially by GWO-GOA, GHO, GWO, IWO, EBFO, and DA, based on overall rank for CE01-CE30 functions of CEC2017 [46]. On three unimodal test functions (CE01-CE03), EDCQPSO performs better than other algorithms.…”
Section: A Comparison of the EDCQPSO with Other Swarm Algorithms
confidence: 99%
“…As far as swarm intelligence optimization algorithms are concerned, a number of related algorithms have been proposed, including grey wolf optimization (GWO) [ 55 ], moth-flame optimization (MFO) [ 56 ], PSO [ 57 ], sine cosine algorithm (SCA) [ 58 ], whale optimizer (WOA) [ 59 ], multi-verse optimizer (MVO) [ 60 ], Harris hawks optimization (HHO) 1 [ 61 ], slime mould algorithm (SMA) 2 [ 62 ], hunger games search (HGS) 3 [ 63 ], Runge Kutta optimizer (RUN) 4 [ 64 ], modified SCA (m_SCA) [ 65 ], boosted GWO (OBLGWO) [ 66 ], opposition-based SCA (OBSCA) [ 67 ], A-C parametric WOA (ACWOA) [ 68 ], biogeography-based learning PSO (BLPSO) [ 69 ], comprehensive learning PSO (CLPSO) [ 70 ], moth-flame optimizer with sine cosine mechanisms (SMFO) [ 71 ], enhanced comprehensive learning particle swarm optimizer (GCLPSO) [ 72 ], enhanced GWO with a new hierarchical structure (IGWO) [ 73 ], improved WOA (IWOA) [ 74 ], and ant colony optimization (ACO) for continuous domains (ACOR) [ 75 ]. Notably, it is well known that ACO [ 76 , 77 ] is an algorithm for solving discrete optimization problems, whereas ACOR can be used to solve optimization problems other than discrete ones.…”
Section: Introduction
confidence: 99%
“…com/HHO.html, accessed on 28 August 2021) [81,103,104], genetic algorithm (GA) [105], chaotic BA (CBA) [106], multi-verse optimizer (MVO) [107], cuckoo search via Lévy flights (CS) [108], firefly algorithm (FA) [109], salp swarm algorithm (SSA) [110,111], gravitational search algorithm (GSA) [112], ant colony optimization (ACO) [72,113,114], krill herd algorithm (KHA) [115], artificial bee colony (ABC) [116]. Meanwhile, there are many corresponding improvement algorithms [70,117], such as enhanced comprehensive learning particle swarm optimization (GLOPSO) [118], chaotic moth-flame optimization (CMFO) [91], hybridizing grey wolf optimization (HGWO) [119], balanced whale optimization algorithm (BWOA) [120], double adaptive random spare reinforced whale optimization algorithm (RDWOA) [121], chaotic mutative moth-flame-inspired optimizer (CLSGMFO) [122], orthogonal learning sine cosine algorithm (OLSCA) [88], multi-strategy enhanced sine cosine algorithm (MSCA) [123], enhanced whale optimizer with associative learning (BM-WOA) [124], enhanced moth flame optimization (SMFO) [125], ant colony optimizer with random spare strategy and chaotic intensification strategy (RCACO) [126], etc.…”
Section: Introduction
confidence: 99%