2011
DOI: 10.1016/j.eswa.2011.06.029

Randomization in particle swarm optimization for global search ability

Cited by 55 publications (23 citation statements) · References 25 publications
“…Wu et al. (2014) introduced a superior-solution-guided PSO framework in which each particle can comprehensively learn from a collection of superior solutions. Zhou et al. (2011) introduced random position PSO (RPPSO), in which a random particle is used to guide the swarm if a randomly generated number is smaller than the proposed probability. Chen et al. (2013) incorporated an aging mechanism into PSO and proposed PSO with an aging leader and challengers (ALC-PSO). When the leader of the ALC-PSO swarm is no longer effective in improving the population, its leading power gradually deteriorates and the leader is eventually replaced by a newly emerging particle that challenges and claims the leadership.…”
Section: PSO Variants and Improvements (mentioning)
confidence: 99%
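A minimal, hypothetical sketch of the aging idea behind ALC-PSO as paraphrased above (the aging rates, lifespan, and the challenger's mutation scale are illustrative assumptions; Chen et al. (2013) define the actual lifespan controller and challenging procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical aging-leader bookkeeping: the leader ages faster when it
# stops improving the swarm, and an aged-out leader is replaced by a
# mutated challenger. Aging rates, lifespan, and mutation scale are
# illustrative assumptions, not the actual ALC-PSO settings.
class AgingLeader:
    def __init__(self, position, lifespan=60):
        self.position = position
        self.age, self.lifespan = 0, lifespan

    def step(self, improved_swarm_best):
        self.age += 1 if improved_swarm_best else 2  # weak leaders age faster
        if self.age >= self.lifespan:
            # Challenger: a perturbed copy of the old leader takes over.
            self.position = self.position + rng.normal(0.0, 0.1, self.position.shape)
            self.age = 0
```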
“…The majority of the tested PSO variants undergo different degrees of performance degradation while addressing rotated problems because the rotating operation introduces a non-separable characteristic into this problem category. The compared variants, their topologies, and their parameter settings:

(Zhan et al., 2009): fully connected; ω: 0.9 → 0.4, c1 + c2 ∈ [3.0, 4.0], δ ∈ [0.05, 0.1], σmax = 1.0, σmin = 0.1
FLPSO-QIW (Tang et al., 2011): comprehensive learning; ω: 0.9 → 0.2, c1: 2 → 1.5, c2: 1 → 1.5, m = 1, Pi ∈ [0.1, 1], K1 = 0.1, K2 = 0.001, σ1 = 1, σ2 = 0
FlexiPSO (Kathrada, 2009): fully connected and local ring; ω: 0.5 → 0.0, c1, c2, c3 ∈ [0.0, 2.0], ε = 0.1, α = 0.01%
FPSO (Montes de Oca et al., 2009b): time-varying; χ = 0.729, Σ ci = 4.1
OLPSO-L (Zhan et al., 2011): orthogonal learning; ω: 0.9 → 0.4, c = 2.0, G = 5
PAE-QPSO (Fu et al., 2012): fully connected; μ ∈ [0, 1], β: 1.0 → 0.5
RPPSO (Zhou et al., 2011): random; ω: 0.9 → 0.4, c_large = 6, c_small = 3
MoPSO (Beheshti et al., 2013): fully connected; no parameters are involved
PSODDS (Jin et al., 2013): fully connected; χ = 0.7298, c1 = c2 = 2.05
PSO-DLTA: fully connected and local ring; ω: 0.9 → 0.4, c1 = c2 = 2.0, z = 8…”
Section: Comparison of PSO-DLTA with Other State-of-the-Art PSO Variants (mentioning)
confidence: 99%
“…The selection of strategies for each particle is based on a ratio derived from a self-adaptively improved probability model. Zhou et al. (2011) introduced random position PSO (RPPSO), wherein a random particle is used to guide the swarm if a randomly generated number is smaller than the proposed probability. Ho et al. (2008) developed orthogonal PSO (OPSO) using the orthogonal experimental design (OED) technique (Montgomery, 1991).…”
Section: PSO Variants and Improvements (mentioning)
confidence: 99%
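A hedged sketch of that guidance rule follows. The probability P, the search bounds, and the pairing of the two acceleration coefficients with the two cases are illustrative assumptions; the parameter table above lists c_large = 6 and c_small = 3, but see Zhou et al. (2011) for the actual definitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of RPPSO's guide selection: with probability P a
# randomly generated position replaces gbest as the swarm's guide.
# P, the bounds, and which case uses c_large vs. c_small are assumptions.
C_LARGE, C_SMALL = 6.0, 3.0  # values from the parameter table above

def select_guide(gbest, lo, hi, P=0.5):
    if rng.random() < P:
        # A random position guides the swarm, boosting exploration.
        return rng.uniform(lo, hi, size=gbest.shape), C_LARGE
    # Otherwise the usual global best guides the swarm.
    return gbest, C_SMALL

# Example: choose a guide for a 10-dimensional swarm in [-5, 5]^10.
guide, c = select_guide(np.zeros(10), -5.0, 5.0)
```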
“…The acceleration factors c1 and c2 indicate the relative attraction toward pbest and gbest, respectively, and rand1 and rand2 are random numbers uniformly distributed between zero and one [13]. The inertia weight parameter w controls the trade-off between global search ability and local search ability during the optimization process [23]. To avoid premature convergence, PSO uses this distinctive balance between global and local exploration of the search space, which prevents particles from becoming stuck in local minima [13].…”
Section: Standard Particle Swarm Optimization (mentioning)
confidence: 99%
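The update the quote describes is the canonical PSO rule, v ← w·v + c1·rand1·(pbest − x) + c2·rand2·(gbest − x) followed by x ← x + v. A minimal runnable sketch of it (the sphere objective, bounds, and parameter values are illustrative assumptions, not taken from [13] or [23]):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Illustrative objective; any minimization problem fits here."""
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 30, 10, 200
w, c1, c2 = 0.7, 2.0, 2.0              # inertia weight, acceleration factors

x = rng.uniform(-5.0, 5.0, (n, dim))   # particle positions
v = np.zeros((n, dim))                 # particle velocities
pbest, pbest_f = x.copy(), sphere(x)   # personal bests and their fitness
gbest = pbest[np.argmin(pbest_f)].copy()  # global best

for _ in range(iters):
    rand1 = rng.random((n, dim))       # uniform in [0, 1)
    rand2 = rng.random((n, dim))
    # Cognitive pull toward pbest, social pull toward gbest,
    # damped by the inertia weight w.
    v = w * v + c1 * rand1 * (pbest - x) + c2 * rand2 * (gbest - x)
    x = x + v
    f = sphere(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best fitness found:", pbest_f.min())
```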