2014
DOI: 10.4304/jsw.9.2.350-357

Particle Swarm Optimization Algorithm with Reverse-Learning and Local-Learning Behavior

Abstract: In order to resolve the conflict between convergence speed and population diversity in the particle swarm optimization (PSO) algorithm, an improved PSO, called the reverse-learning and local-learning PSO (RLPSO) algorithm, is presented, in which a reverse-learning behavior is implemented by some particles while a local-learning behavior is adopted by elite particles in each generation. During the reverse-learning process, some inferior particles of the initial population and each particle's historical worst position are reserved to at…
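The abstract only sketches the mechanism, so the following minimal Python sketch shows one plausible reading of it: ordinary particles follow the standard PSO velocity and position update, a few of the worst particles take a reverse-learning jump to the opposite point of their historical worst positions, and a few elite particles perform a small local search around the global best. The function name rlpso_step, the reverse-learning formula, the local-search rule, and all coefficients are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def rlpso_step(pos, vel, fitness, pbest, pworst, gbest, bounds,
               w=0.7, c1=1.5, c2=1.5, n_reverse=5, n_elite=3, rng=None):
    """One hypothetical RLPSO-style generation (minimisation assumed).

    pos, vel, pbest, pworst : (n_particles, dim) arrays
    fitness                 : (n_particles,) current fitness values
    gbest                   : (dim,) current global best position
    bounds                  : (low, high) scalar search bounds
    """
    if rng is None:
        rng = np.random.default_rng()
    n, dim = pos.shape
    low, high = bounds

    # Standard PSO velocity/position update for every particle.
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)

    # Reverse-learning (assumed rule): the n_reverse worst particles jump to
    # the opposite point of their historical worst positions within the bounds.
    worst_idx = np.argsort(fitness)[-n_reverse:]
    pos[worst_idx] = low + high - pworst[worst_idx]

    # Local-learning (assumed rule): the n_elite best particles take a small
    # Gaussian step around the global best to refine it.
    elite_idx = np.argsort(fitness)[:n_elite]
    step = 0.01 * (high - low) * rng.standard_normal((n_elite, dim))
    pos[elite_idx] = np.clip(gbest + step, low, high)

    return pos, vel
```

In a full optimizer, fitness, pbest, pworst, and gbest would be re-evaluated after each call to rlpso_step.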

Cited by 23 publications (12 citation statements)
References 15 publications
“…At the same time, one-year national debt is used as the risk-free asset (a yield of 1.95%), on the assumption that a total of 1 million in assets can be used for investment and that no securities have been purchased before the investment [21, 22].…”
Section: Experiments Results and Discussion (mentioning)
confidence: 99%
“…As this parameter increases, the random search distribution transitions gradually from a Cauchy distribution to a Gaussian distribution; in the later stage the random search resembles a Gaussian distribution, which has good local-exploitation ability. Opposition-based learning, proposed by Tizhoosh [19], is a relatively new technique in intelligent computing and has been applied successfully in many intelligent-algorithm optimizations [20][21][22]. As theoretically verified by Zhong and others [23], opposition-based learning can obtain a solution close to the global optimum with a higher probability.…”
Section: Random Search Strategy Based on Dynamic Adaptive Distribution (mentioning)
confidence: 99%
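The two ingredients named in this statement, a Cauchy-to-Gaussian transition of the random search and opposition-based learning [19], can be made concrete with a short sketch. The schedule in adaptive_step and the keep-the-better selection rule are assumptions for illustration; only the opposite-point formula low + high - x is the standard opposition-based learning construction.

```python
import numpy as np

def opposition_candidates(pop, low, high):
    """Opposite point x_opp = low + high - x for every individual and dimension."""
    return low + high - pop

def opposition_based_selection(pop, low, high, objective):
    """Keep the better of each individual and its opposite (minimisation assumed)."""
    opp = opposition_candidates(pop, low, high)
    f_pop = np.apply_along_axis(objective, 1, pop)
    f_opp = np.apply_along_axis(objective, 1, opp)
    return np.where((f_pop <= f_opp)[:, None], pop, opp)

def adaptive_step(scale, t, t_max, rng):
    """Heavy-tailed Cauchy steps early in the run, Gaussian steps late (assumed schedule)."""
    mix = t / t_max                          # 0 -> mostly Cauchy, 1 -> mostly Gaussian
    cauchy = scale * rng.standard_cauchy()
    gauss = scale * rng.standard_normal()
    return (1.0 - mix) * cauchy + mix * gauss

# Example: opposition-based refinement of a random population on the sphere function.
rng = np.random.default_rng(0)
population = rng.uniform(-5.0, 5.0, size=(8, 3))
refined = opposition_based_selection(population, -5.0, 5.0,
                                     objective=lambda x: float(np.sum(x ** 2)))
```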
“…The performance of an RBF network is determined by its parameters, namely the centers and variances of the basis functions as well as the network weights [10]. If a chaotic particle swarm algorithm is used for neural network training, the particles are less likely to fall into a local optimum [11][12]. The algorithm can also expand the search space, search for the global optimal solution, and speed up the fitting of the neural network training algorithm.…”
Section: The Algorithm of RBF Neural Network with Improved Chaos (mentioning)
confidence: 99%
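Since the citing work treats the basis-function centers, variances, and output weights as the quantities a (chaotic) PSO must tune, a brief sketch of how such a parameter vector maps onto a Gaussian RBF network may help. The flat-vector layout in decode_particle and the MSE objective are assumptions for illustration, not the cited paper's exact encoding.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Output of a Gaussian RBF network: sum_j w_j * exp(-||x - c_j||^2 / (2 s_j^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)          # squared distances to each center
    phi = np.exp(-d2 / (2.0 * widths ** 2))          # Gaussian basis activations
    return phi @ weights

def decode_particle(theta, n_centers, dim):
    """Unpack a flat particle vector into centers, widths, and output weights.

    Assumed layout: [centers (n_centers*dim) | widths (n_centers) | weights (n_centers)]
    """
    c_end = n_centers * dim
    centers = theta[:c_end].reshape(n_centers, dim)
    widths = np.abs(theta[c_end:c_end + n_centers]) + 1e-6   # keep widths positive
    weights = theta[c_end + n_centers:]
    return centers, widths, weights

def mse_fitness(theta, X, y, n_centers):
    """Training error used as the swarm's fitness (assumed objective)."""
    centers, widths, weights = decode_particle(theta, n_centers, X.shape[1])
    preds = np.array([rbf_forward(x, centers, widths, weights) for x in X])
    return float(np.mean((preds - y) ** 2))
```

A chaotic PSO variant would then minimise mse_fitness over the flat parameter vector theta, typically using a chaotic map such as the logistic map only to initialise or perturb the swarm.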