2015
DOI: 10.1016/j.ins.2014.08.039
A social learning particle swarm optimization algorithm for scalable optimization

Cited by 652 publications (288 citation statements). References 59 publications.
“…This paper aims to push the boundary of surrogate-assisted optimization techniques by proposing a surrogate-assisted cooperative swarm optimization algorithm, SA-COSO for short, for solving high-dimensional time-consuming optimization problems up to a dimension of 100. The SA-COSO consists of two cooperative PSO variants, one being a PSO with a constriction factor [52] and the other a social learning based PSO (SL-PSO) [53]. These two PSO variants cooperate in such a way that a particle in the PSO learns not only from its personal and global best particles, but also from the global best of the SL-PSO, whereas the particles in the SL-PSO may also learn from promising solutions contributed by the PSO.…”
Section: Local-Surrogate-Assisted Metaheuristic Algorithms
Mentioning, confidence: 99%
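The cooperation described above lends itself to a simple illustration. The following is a minimal sketch, not the published SA-COSO formulation, of how the constriction-factor PSO side of such a scheme might add a third attractor toward the global best found by the SL-PSO swarm; the function name, the coefficient c3, and all coefficient values are illustrative assumptions.

import numpy as np

def cooperative_pso_step(x, v, pbest, gbest_pso, gbest_slpso,
                         chi=0.729, c1=2.05, c2=2.05, c3=2.05, rng=None):
    """One velocity/position update for the constriction-factor PSO side of a
    cooperative scheme: besides its personal best and its own swarm's global
    best, the particle is also attracted toward the best solution found by the
    SL-PSO swarm (gbest_slpso). Coefficients and weighting are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    v_new = chi * (v
                   + c1 * r1 * (pbest - x)
                   + c2 * r2 * (gbest_pso - x)
                   + c3 * r3 * (gbest_slpso - x))
    return x + v_new, v_new

In the reverse direction, the quoted passage only states that SL-PSO particles may learn from promising solutions contributed by the PSO, e.g. by injecting them into the SL-PSO population before ranking.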
“…Among these variants, the comprehensive learning particle swarm optimizer (CLPSO) [59], the competitive swarm optimizer [60] and the social learning particle swarm optimization (SL-PSO) [53] showed better performance in preserving the diversity of the swarm and discouraging premature convergence. Experimental results in [53] showed that SL-PSO has a higher computational efficiency in comparison with some representative PSO variants including CLPSO.…”
Section: A. Particle Swarm Optimization Variants
Mentioning, confidence: 99%
“…The initial points that serve as the initial PSO particles are provided by skeletonization. A social learning variant of PSO is introduced in [Cheng and Jin 2015], inspired by the way animals in nature learn by observing their peers. Each particle starts with a random solution and a fitness function is used to evaluate each solution.…”
Section: A:19
Mentioning, confidence: 99%
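A minimal sketch of the peer-imitation update that this excerpt alludes to, assuming particles are ranked by fitness and every particle except the best learns, dimension by dimension, from a randomly chosen better-ranked demonstrator plus the swarm mean position. The function and parameter names (e.g. epsilon for the social-influence weight) are illustrative, and details such as SL-PSO's per-particle learning probability are omitted.

import numpy as np

def social_learning_step(X, F, deltas, epsilon=0.01, rng=None):
    """Imitation-based update sketch: rank particles by fitness F (smaller is
    better); each non-best particle copies behaviour, per dimension, from a
    randomly chosen better-ranked demonstrator and is also pulled toward the
    swarm mean. Fitness would be re-evaluated after the move in a full loop."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(F)[::-1]               # worst first, best last
    X, deltas = X[order].copy(), deltas[order].copy()
    mean_pos = X.mean(axis=0)
    n, d = X.shape
    for i in range(n - 1):                    # the best particle is not updated
        k = rng.integers(i + 1, n, size=d)    # per-dimension demonstrators
        r1, r2, r3 = rng.random((3, d))
        deltas[i] = (r1 * deltas[i]
                     + r2 * (X[k, np.arange(d)] - X[i])
                     + r3 * epsilon * (mean_pos - X[i]))
        X[i] = X[i] + deltas[i]
    return X, deltas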
“…Although PSO has witnessed great success over the past two decades, its performance is still limited when the optimization problem has a high-dimensional and complex search space [25,11]. In order to enhance the performance of PSO, a number of PSO variants have been proposed, including parameter adaptation based variants [44,20], new topological structure based variants [26,9], and hybridization based variants [7,37], to name a few.…”
Section: The Canonical PSO Algorithm
Mentioning, confidence: 99%
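For reference, a minimal sketch of the canonical PSO velocity and position update whose scalability limits this excerpt discusses; the inertia and acceleration coefficient values shown are common defaults, not taken from the cited works.

import numpy as np

def canonical_pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618, rng=None):
    """Canonical PSO update: inertia term plus cognitive (personal-best) and
    social (global-best) attraction. Coefficient values are common defaults."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new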