2018
DOI: 10.1007/s13369-018-3536-0

Hybrid Nelder–Mead Algorithm and Dragonfly Algorithm for Function Optimization and the Training of a Multilayer Perceptron

Cited by 43 publications (42 citation statements)
References: 41 publications
“…A large number of social interactions in DA causes it to become trapped in local optima, to solve problems with reduced accuracy, and to strike an improper balance between exploitation and exploration. To overcome these deficiencies, reference [27] combined DA with an improved variant of the Nelder-Mead algorithm, the so-called INMDA, to strengthen its local exploration ability and avoid falling into local optima. The INMDA can be divided into two steps: in the first step, DA is used to explore the solution space, providing the artificial dragonflies with the variety needed to find the global optimum.…”
Section: Hybridized Versions of Dragonfly Algorithm (mentioning)
confidence: 99%
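The explore-then-exploit structure described in this excerpt can be illustrated with a minimal sketch. This is not the authors' implementation: the objective (`sphere`), the random-sampling stand-in for the dragonfly exploration phase, and the function name `inmda_sketch` are all assumptions made for illustration; only the overall two-phase shape and SciPy's Nelder-Mead simplex method are taken as given.

```python
# Minimal sketch of the two-phase explore/exploit structure attributed to
# INMDA above. Phase 1 (global exploration) is stood in for by plain random
# sampling; the real algorithm uses dragonfly swarm dynamics. Phase 2 refines
# the best explorer with the Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize  # provides method="Nelder-Mead"

def sphere(x):
    # Hypothetical benchmark objective, chosen only for illustration.
    return float(np.sum(x ** 2))

def inmda_sketch(f, dim, bounds=(-10.0, 10.0), n_agents=30, n_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds

    # Phase 1: global exploration (placeholder for the dragonfly algorithm).
    best_x, best_f = None, np.inf
    for _ in range(n_iters):
        swarm = rng.uniform(lo, hi, size=(n_agents, dim))
        vals = np.apply_along_axis(f, 1, swarm)
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            best_x, best_f = swarm[i].copy(), float(vals[i])

    # Phase 2: local exploitation with the Nelder-Mead simplex,
    # started from the best solution found during exploration.
    res = minimize(f, best_x, method="Nelder-Mead")
    return res.x, res.fun

x_star, f_star = inmda_sketch(sphere, dim=5)
print(x_star, f_star)
```

Handing the exploration phase's incumbent to a simplex refiner is what gives the hybrid its stronger local exploitation while preserving the swarm's global diversity.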
“…To further show the significant superiority of DDS-POBL, we compared it with eight state-of-the-art optimization algorithms, i.e., the particle swarm optimizer (PSO) [28], the piecewise opposition-based learning (POHS) [20], the completely derandomized self-adaptation evolution strategies (CMA-ES) [29], the composite differential evolution (CoDE) [30], the memory-based hybrid dragonfly algorithm (MHDA) [28], the exploration-enhanced grey wolf optimizer (EEGWO) [24], the hybrid method based on DA and the improved NM simplex algorithm (INMDA) [13], and the firefly algorithm with neighborhood attraction (NaFA) [31]. In this experiment, the parameter settings of EEGWO, INMDA, and DDS-POBL are as follows: the population size is 30, the maximum number of iterations is 500, and the number of independent experiments is 50; all other algorithm parameters are consistent with their original literature.…”
Section: Comparison With Other State-of-the-Art Algorithms (mentioning)
confidence: 99%
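The experimental protocol quoted here (population size 30, 500 iterations, 50 independent runs) is straightforward to reproduce in outline. The sketch below assumes a generic optimizer interface: `run_algorithm` is a hypothetical stand-in for any of the compared methods, with random search used only so the code runs end to end.

```python
# Hedged sketch of the comparison protocol quoted above: 50 independent runs
# per algorithm, population size 30, 500 iterations, reporting the mean and
# standard deviation of the best objective value found per run.
import numpy as np

def run_algorithm(objective, dim, pop_size=30, max_iters=500, seed=0):
    # Hypothetical optimizer interface; random search keeps the sketch runnable.
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(max_iters):
        pop = rng.uniform(-10.0, 10.0, size=(pop_size, dim))
        best = min(best, float(np.min(np.apply_along_axis(objective, 1, pop))))
    return best

def benchmark(objective, dim, n_runs=50):
    # One independent seed per run, as in the quoted setup.
    results = np.array([run_algorithm(objective, dim, seed=s) for s in range(n_runs)])
    return results.mean(), results.std()

mean, std = benchmark(lambda x: float(np.sum(x ** 2)), dim=10)
print(f"mean={mean:.3e}  std={std:.3e}")
```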
“…Some of the famous single-solution-based heuristic global search algorithms are simulated annealing (SA) [2], the threshold accepting method (TA) [3], microcanonical annealing (MA) [4], tabu search (TS) [5], guided local search (GLS) [6], and dynamically dimensioned search (DDS) [7,8]. Population-based ones include evolutionary algorithms (EA) [9], genetic algorithms (GA) [10], particle swarm optimization (PSO) [11], the dragonfly algorithm (DA) [12,13], and shuffled complex evolution (SCE) algorithms [14].…”
Section: Introduction (mentioning)
confidence: 99%
“…In other words, one approach may show very promising results on a particular class of problems, yet the same algorithm may show poor results on a different set of problems [24]. Therefore, every year more researchers improve existing approaches or propose new meta-heuristics for solving different complex problems; examples include the dragonfly algorithm hybridized with the improved Nelder-Mead algorithm (INMDA) for function optimization and multilayer perceptron training [25], and the dynamically dimensioned search enhanced with piecewise opposition-based learning (DDS-POBL) for global optimization [26]. This also motivates our attempt in this paper to improve the GWO algorithm for solving complex ELD problems.…”
Section: Introduction (mentioning)
confidence: 99%