2016
DOI: 10.1016/j.ins.2015.07.035

Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems

Cited by 92 publications (23 citation statements)
References 61 publications (124 reference statements)
“…The parameter settings for this experiment were an eco-size of 50 and 500,000 fitness evaluations over 100 runs, the same as in reference [20]. The experimental results are compared with seven state-of-the-art PSO variants (FIPS [26], UPSO [25], CLPSO [18], χPSO [28], BBPSO [29], DMSPSO [30], and DMeSR-PSO [20]) and five metaheuristic algorithms (GA [5], BBO [8], DE [9], PSMBA [31] and ABC [32]). For each function, the algorithms are compared in terms of the median, mean and standard deviation of their results.…”
Section: Experimental Results of CEC2005 Benchmark Functions
confidence: 99%
“…Table 9 illustrates the experimental results of six (F1-F6) CEC 2005 benchmark functions for dimension 50. In Table 9, all results except those of the proposed HSOS are taken from reference [20]. From Table 9, it can be observed that for functions F1, F4 and F6, HSOS performs better than the other algorithms; for functions F2 and F3 the best results are provided by the DMeSR-PSO method, and for F5 the best result is provided by the DE method.…”
Section: Experimental Results of CEC2005 Benchmark Functions
confidence: 99%
“…The test results show that MLPSO-STP performs better than classical PSO variants on the selected benchmark functions. Tanweer et al [1] divided the whole swarm into mentors, independents and mentees according to each particle's fitness value and its Euclidean distance from Gbest. The high-quality mentors guide the low-quality mentees, and each mentee learns different dimensions from different particles.…”
Section: The Social Learning Leader
confidence: 99%
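The mentor/independent/mentee split described above can be sketched in a few lines. This is only an illustrative partition, not the original DMeSR-PSO formulation: the combined rank (fitness rank plus distance-to-Gbest rank) and the 25% group fractions are assumptions made for the example.

```python
import numpy as np

def partition_swarm(positions, fitness, gbest, mentor_frac=0.25, mentee_frac=0.25):
    """Illustrative mentoring-style partition (assumed ranking rule, not the
    paper's exact criterion): rank particles by fitness and by Euclidean
    distance to gbest, then split into mentors, independents and mentees."""
    n = len(fitness)
    dist = np.linalg.norm(positions - gbest, axis=1)
    # Combined rank: better fitness and closer to gbest -> lower score
    score = np.argsort(np.argsort(fitness)) + np.argsort(np.argsort(dist))
    order = np.argsort(score)
    n_mentors = max(1, int(mentor_frac * n))
    n_mentees = max(1, int(mentee_frac * n))
    mentors = order[:n_mentors]          # best-ranked particles
    mentees = order[n - n_mentees:]      # worst-ranked particles
    independents = order[n_mentors:n - n_mentees]
    return mentors, independents, mentees
```

In a full algorithm, each mentee would then update some of its dimensions toward a mentor rather than only toward its personal and global bests.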
“…Traditional optimization methods such as least-squares approximation, gradient descent and Newton's method are single-point optimizers and require gradient information. Hence, most traditional optimizers are unsuited to complex multimodal problems and non-differentiable optimization problems [1]. To cope with complex optimization problems, several swarm intelligence (SI) algorithms, such as particle swarm optimization (PSO) [2,3], ant colony optimization (ACO) [4], artificial bee colony (ABC) [5], the cuckoo search algorithm (CS) [6,7] and the grey wolf optimizer (GWO) [8], have been proposed over the past decades.…”
Section: Introduction
confidence: 99%
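To make the contrast with single-point, gradient-based optimizers concrete, here is a minimal, gradient-free PSO sketch. It uses the generic textbook velocity/position update, not any of the specific variants cited above; the hyperparameters (w, c1, c2) and the Rastrigin test function are standard illustrative choices.

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal textbook PSO: a population of particles searches without any
    gradient information, attracted to personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest = x.copy()                             # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Rastrigin: a standard non-convex, multimodal benchmark
rastrigin = lambda z: 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = pso_minimize(rastrigin)
```

Because the update uses only function values, the same loop applies unchanged to non-differentiable objectives, which is the advantage the passage attributes to SI algorithms.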