2022
DOI: 10.1109/access.2022.3144431

AOAAO: The Hybrid Algorithm of Arithmetic Optimization Algorithm With Aquila Optimizer

Abstract: Many new algorithms have been proposed to solve the mathematical equations formulated to describe real-world problems, but no single algorithm can solve them all, and most of the proposed algorithms have defects in some aspects and need to be improved in application. In order to find a more efficient optimization algorithm, and inspired by the better performance of the Arithmetic Optimization Algorithm (AOA) and the Aquila Optimizer (AO), we proposed a hybridization algorit…

Cited by 65 publications (18 citation statements) | References 58 publications
“…Empirical results, as evidenced in Figure 1 retrieved from the analysis of [60], demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods while bearing minimal training cost; however, some researchers have also used a derivative of DDPG for positive optimization [55]. Modern optimization algorithms such as the Aquila Optimization Algorithm [61] and the Hybrid Algorithm of Arithmetic Optimization Algorithm With Aquila Optimizer (AOAAO) [62] have special applications in machine-learning-based problem solving. However, Adam, which possesses inherent advantages over two other extensions of stochastic gradient descent, namely the Adaptive Gradient Algorithm (AdaGrad) and Root Mean Square Propagation (RMSProp), has been used in current research [63,64]. AdaGrad maintains a per-parameter learning rate, which improves performance on problems with sparse gradients.…”
Section: Selection Of Optimizer Algorithm (mentioning)
confidence: 95%
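
A minimal sketch of the per-parameter update rules mentioned in that excerpt may help; it is written for this page and not taken from the cited works. The function names, step sizes, and the toy quadratic objective are illustrative assumptions, though the AdaGrad and Adam formulas themselves are the standard ones.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    # AdaGrad: accumulate squared gradients and scale each parameter's step
    # by the inverse square root of its own accumulated history.
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient (m) and of its
    # square (v), with bias correction for the first iterations.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage on f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(w)  # w ends up near the minimum at the origin
```
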
“…At this stage, the chosen features are passed into the ANFIS model and utilized for the recognition of anomalies in the CPS environment. Generally, ANFIS produces a mapping between inputs and outputs by applying “IF-THEN rules” (also known as the “Takagi-Sugeno inference model”) [19]. As shown, the inputs of Layer 1 are characterized as x and y, and the output of the ith node is signified as O_{1,i}, as follows:…”
Section: ANFIS Classification (mentioning)
confidence: 99%
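
The Layer-1 operation referred to in that excerpt is the fuzzification step of a Takagi-Sugeno ANFIS: node i simply outputs the membership degree O_{1,i} of its crisp input. A minimal sketch, assuming Gaussian membership functions and made-up premise parameters (neither is specified in the excerpt):

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    # Membership degree of crisp input x in a fuzzy set (center, width sigma).
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def anfis_layer1(x, y, params_x, params_y):
    # Layer 1 (fuzzification): O_{1,i} = mu_Ai(x) for the x-side nodes and
    # mu_Bi(y) for the y-side nodes.
    mu_A = [gaussian_mf(x, c, s) for c, s in params_x]
    mu_B = [gaussian_mf(y, c, s) for c, s in params_y]
    return mu_A, mu_B

# Illustrative premise parameters: two fuzzy sets per input.
params_x = [(0.0, 1.0), (1.0, 0.5)]
params_y = [(0.5, 0.8), (2.0, 1.2)]
print(anfis_layer1(0.3, 1.1, params_x, params_y))
```
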
“…The behavior of Henry's law
[18] 2020 Hide objects game optimization (game to find a hidden object)
Physics-based:
[19] 2011 Galaxy-Based Search Algorithm (the spiral arm of spiral galaxies to search)
[20] 2018 Gravitational Local Search (the law of gravity and mass interactions)
[21] 2012 Charged System Search (principles from physics and mechanics)
Human-based:
[22] 2012 Teaching based learning (the influence of a teacher on the output of learners)
[23] 2018 Socio Evolution & Learning Optimization (social learning behavior of humans)
[24] 2011 Brain storm optimization (the brainstorming process)
[25] 2019 Poor & Rich optimization algorithm (the rich to achieve wealth and improve their economic situation)
[26] 2021
[30] 2021 GWOHHO (Grey wolf + Harris Hawks)
[31] 2022 AOAAO (Aquila + Arithmetic optimization)
Some hybrid algorithms have been reported to outperform native algorithms in feature selection. Zhang et al. [32] proposed a hybrid Aquila Optimizer with Arithmetic Optimization Algorithm (AO-AOA), which provides faster convergence in the best global search and produced better results than native methods.…”
Section: Type (mentioning)
confidence: 99%
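
For readers unfamiliar with the hybridization idea these excerpts refer to, the sketch below alternates between an exploratory jump around the current best solution and an exploitative move toward it, switching on a simple linear schedule. This is not the AOAAO update scheme from the cited paper; the operators, the schedule, and the name hybrid_aoa_ao are simplified placeholders.

```python
import numpy as np

def hybrid_aoa_ao(objective, dim, bounds, pop_size=20, max_iter=200, seed=0):
    # Conceptual hybrid loop: each candidate either takes a heavy-tailed jump
    # around the best solution (a stand-in for an AO-style exploration
    # operator) or moves toward the best with a shrinking step (a stand-in
    # for an AOA-style exploitation operator).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])
    best_idx = int(fitness.argmin())
    best, best_f = pop[best_idx].copy(), fitness[best_idx]

    for t in range(1, max_iter + 1):
        p_explore = 1.0 - t / max_iter            # explore early, exploit late
        step = (hi - lo) * (1.0 - t / max_iter)   # shrinking step size
        for i in range(pop_size):
            if rng.random() < p_explore:
                # Exploration: random jump in a neighborhood of the best.
                cand = best + 0.1 * step * rng.standard_cauchy(dim)
            else:
                # Exploitation: move toward the best with a small perturbation.
                cand = pop[i] + rng.random(dim) * (best - pop[i]) \
                       + 0.01 * step * rng.standard_normal(dim)
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fitness[i]:
                pop[i], fitness[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

# Example: minimize the 5-dimensional sphere function on [-10, 10]^5.
best, val = hybrid_aoa_ao(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-10.0, 10.0))
print(best, val)
```
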