2020
DOI: 10.1016/j.asoc.2019.106038

Non-inertial opposition-based particle swarm optimization and its theoretical analysis for deep learning applications

Cited by 26 publications (4 citation statements)
References 36 publications
“…The searching abilities of metaheuristic optimization algorithms are improved using different opposition-based learning methods, as opposite candidate solutions have a higher probability of being nearer to the optimal solution than randomly generated ones, and are therefore more useful for generating new solutions in the search phases (Rahnamayan et al., 2008b; Roy et al., 2014; Ventresca et al., 2010). Different opposition methods have been proposed and integrated with algorithms to enhance their search abilities (El-Abd, 2011; Han & He, 2007; Kang et al., 2020; Kumar, Mandal, & Chakraborty, 2020; Rahnamayan et al., 2008a). In the SABES algorithm, the dynamic-opposite learning method is applied at the initialization and search phases to avoid local optimal stagnation and premature convergence.…”
Section: Dynamic-Opposite Learning Based Improvement in SABES
confidence: 99%
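The dynamic-opposite learning step quoted above can be illustrated with a minimal sketch, assuming box bounds [lb, ub], the standard OBL opposite point x_opp = lb + ub − x, and one common dynamic-opposite formulation; the cited SABES work may use different weights, and all names here are illustrative:

```python
import numpy as np

def opposite(x, lb, ub):
    """Standard OBL opposite point: x_opp = lb + ub - x."""
    return lb + ub - x

def dynamic_opposite(x, lb, ub, w=3.0, rng=None):
    """Sketch of a dynamic-opposite candidate: the opposite point is
    perturbed by random weights so opposite samples stay diverse
    across iterations (one common formulation, not necessarily the
    exact SABES update)."""
    rng = np.random.default_rng() if rng is None else rng
    x_o = opposite(x, lb, ub)
    # x_do = x + w * r1 * (r2 * x_opp - x), clipped back into bounds
    x_do = x + w * rng.random(x.shape) * (rng.random(x.shape) * x_o - x)
    return np.clip(x_do, lb, ub)
```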
“…One set was randomly generated, and the other set was generated in the opposite direction based on the existing positions. Reference [23] used the opposite direction for its learning strategies. The reason for using the opposite direction is that, in a given environment, the search direction may be opposite to the direction of the optimal solution.…”
Section: Opposite Direction
confidence: 99%
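The two-set scheme this excerpt describes (one random set, one opposite set) is, in outline, classic opposition-based initialization: evaluate both sets and keep the fittest half of their union. A minimal sketch, assuming minimization and box bounds, with all names hypothetical:

```python
import numpy as np

def obl_initialize(pop_size, dim, lb, ub, fitness, rng=None):
    """Opposition-based initialization sketch: draw a random population,
    form its opposite population, and keep the fittest half of the
    union (minimization assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    pop = lb + rng.random((pop_size, dim)) * (ub - lb)
    opp = lb + ub - pop                      # opposite set
    union = np.vstack([pop, opp])
    scores = np.array([fitness(ind) for ind in union])
    return union[np.argsort(scores)[:pop_size]]
```

For example, `obl_initialize(30, 5, -5.0, 5.0, lambda x: np.sum(x**2))` returns the 30 fittest points out of 60 candidates on a sphere function.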
“…OBL simultaneously takes advantage of a current estimate of the solution and its opposite point to improve the search capacity of the host algorithm. The OBL concept has been incorporated into many metaheuristics, such as Particle Swarm Optimization [20], Harmony Search [21], and the Whale Optimization Algorithm [22]. SSA, JAYA, and OBL have been coupled with one another in the literature to solve various optimization problems [23,24]; however, to the best of the authors' knowledge, these three techniques have never been combined before.…”
Section: Introduction
confidence: 99%
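Pairing a current estimate with its opposite point, as this excerpt describes, is often realized as "generation jumping" when OBL is grafted onto a metaheuristic such as PSO. A minimal sketch assuming minimization, with hypothetical names and a representative jumping rate:

```python
import numpy as np

def obl_generation_jump(pop, fitness, jumping_rate=0.3, rng=None):
    """Generation-jumping sketch: with probability `jumping_rate`,
    compute opposites against the population's current (dynamic)
    bounds and keep the fitter of each (individual, opposite) pair.
    Minimization assumed; constants are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() >= jumping_rate:
        return pop
    lo, hi = pop.min(axis=0), pop.max(axis=0)   # dynamic interval
    opp = lo + hi - pop
    better = np.array([fitness(o) < fitness(p) for o, p in zip(opp, pop)])
    pop = pop.copy()
    pop[better] = opp[better]
    return pop
```

Using the population's own min/max as the opposition interval (rather than the fixed search bounds) keeps the opposite points inside the region the swarm currently occupies, which is the usual rationale for the dynamic interval in this family of methods.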