2022
DOI: 10.1016/j.ins.2022.06.036

An ensemble of differential evolution and Adam for training feed-forward neural networks

Cited by 102 publications (33 citation statements)
References 43 publications
“…The aim is to reach the minimum of that cost function by taking small steps in the direction of the negative gradient. One drawback is their tendency to converge easily towards local optima [5]. SGD [6], as one of the most favored gradient-based algorithms for training NNs, also suffers from early convergence.…”
Section: A Convolutional Neural Network
Mentioning, confidence: 99%
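
A minimal sketch of the plain SGD update this excerpt describes, applied to a toy quadratic cost (illustrative only, not code from the cited paper; the cost function, learning rate, and iteration count are arbitrary assumptions):

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """Plain SGD: take a small step against the gradient."""
    return w - lr * grad

# Toy cost f(w) = ||w - target||^2, whose gradient is 2 * (w - target).
target = np.array([1.0, -2.0])
w = np.zeros(2)
for _ in range(200):
    w = sgd_step(w, 2.0 * (w - target))
print(w)  # approaches [1.0, -2.0]; on non-convex losses such steps can stall in a local optimum
```
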
“…To avert premature convergence, a wide range of adaptive gradient algorithms have been developed that adjust the learning rate in efficient ways; one worth mentioning is Adam. The issue of premature convergence has been tackled in past literature by trying to improve the global search of gradient descent methods [5], [7]. All of the proposed solutions in these respective papers focus on hybridizing the fast convergence of gradient descent with the gradient-free global search of meta-heuristic optimization algorithms.…”
Section: A Convolutional Neural Network
Mentioning, confidence: 99%
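
For reference, the Adam update the excerpt alludes to keeps running estimates of the gradient's first and second moments and scales each step by them. A minimal sketch with the standard default hyperparameters, not tied to the cited paper's implementation:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates."""
    m = b1 * m + (1 - b1) * grad          # first moment: running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2     # second moment: running mean of squared gradients
    m_hat = m / (1 - b1 ** t)             # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

The per-parameter scaling by the square root of the second moment is what adapts the effective learning rate, which is the property the excerpt credits with mitigating premature convergence.
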
“…Various MHs have been proposed and evaluated over the last decades to solve a colorful palette of challenging problems [10]. However, some of the newly developed metaheuristics do not differ substantially from the general structures of well-reputed and conventional MHs such as Simulated Annealing (SA) [11], Differential Evolution (DE) [12], Genetic Algorithms (GA) [13], Grey-Wolf Optimizer (GWO) [14], and Particle Swarm Optimization (PSO) [15], to mention a few.…”
Section: Introduction
Mentioning, confidence: 99%
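
Of the conventional metaheuristics listed, differential evolution is the one the reviewed paper builds on. Its basic generation step can be sketched as follows (an illustrative DE/rand/1/bin sketch with assumed control parameters F=0.5 and CR=0.9, not tied to any of the cited implementations):

```python
import numpy as np

def de_generation(pop, cost, f=0.5, cr=0.9, rng=None):
    """One DE/rand/1/bin generation: differential mutation, binomial crossover, greedy selection."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])       # differential mutation
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True                  # guarantee at least one mutant gene
        trial = np.where(mask, mutant, pop[i])        # binomial crossover
        if cost(trial) < cost(pop[i]):                # greedy one-to-one selection
            new_pop[i] = trial
    return new_pop
```
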
“…A solution was proposed in the form of an ensemble of differential evolution and Adam (EDEAdam), which combines the Adam optimizer and the differential evolution algorithm into a robust and efficient search mechanism that achieves better results in both global and local search. The integration of the two methods not only improved results but also showed faster convergence (Xue et al., 2022).…”
Section: Introduction
Mentioning, confidence: 99%
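
The excerpt does not spell out how EDEAdam interleaves the two searches, so the loop below is only a generic illustration of combining a DE-style population step (global search) with an Adam refinement of the current best candidate (local search); the schedule, population size, hyperparameters, and toy loss are assumptions, not the algorithm of Xue et al. (2022).

```python
import numpy as np

def loss(w):                      # toy stand-in for a network's training loss
    return float(np.sum((w - 3.0) ** 2))

def grad(w):                      # analytic gradient of the toy loss
    return 2.0 * (w - 3.0)

rng = np.random.default_rng(0)
pop = rng.normal(size=(20, 5))            # population of candidate weight vectors
m, v = np.zeros(5), np.zeros(5)           # Adam moment buffers for the refined candidate

for t in range(1, 101):
    # Global search: one DE/rand/1/bin-style generation over the population.
    for i in range(len(pop)):
        a, b, c = rng.choice([j for j in range(len(pop)) if j != i], size=3, replace=False)
        trial = np.where(rng.random(5) < 0.9, pop[a] + 0.5 * (pop[b] - pop[c]), pop[i])
        if loss(trial) < loss(pop[i]):
            pop[i] = trial
    # Local search: one Adam step on the current best individual.
    best = int(np.argmin([loss(w) for w in pop]))
    g = grad(pop[best])
    m = 0.9 * m + 0.1 * g
    v = 0.999 * v + 0.001 * g ** 2
    pop[best] -= 0.05 * (m / (1 - 0.9 ** t)) / (np.sqrt(v / (1 - 0.999 ** t)) + 1e-8)

best = int(np.argmin([loss(w) for w in pop]))
print(loss(pop[best]))   # should approach 0 on this convex toy problem
```
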