2022
DOI: 10.3390/math10050800
SPGD: Search Party Gradient Descent Algorithm, a Simple Gradient-Based Parallel Algorithm for Bound-Constrained Optimization

Abstract: Nature-inspired metaheuristic algorithms remain a strong trend in optimization. Human-inspired optimization algorithms should be more intuitive and relatable. This paper proposes a novel optimization algorithm inspired by a human search party. We hypothesize the behavioral model of a search party searching for a treasure. Motivated by the search party’s behavior, we abstract the “Divide, Conquer, Assemble” (DCA) approach. The DCA approach allows us to parallelize the traditional gradient descent algorithm in a…
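The abstract describes the DCA (Divide, Conquer, Assemble) idea only at a high level and is truncated here, so the sketch below is an illustrative assumption rather than the paper's actual method: the bound-constrained search space is divided among parallel "search parties", each runs projected gradient descent with a numerical gradient inside its sub-region, and the best point found is assembled at the end. The function names (`spgd_like_search`, `local_descent`), the split along the first coordinate, and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of a "Divide, Conquer, Assemble" parallel gradient descent.
# The partitioning scheme, step size, and assembly rule are assumptions made for
# illustration; they are not taken from the paper itself.
import numpy as np
from multiprocessing import Pool

def objective(x):
    # Example bound-constrained test function (sphere); stands in for the real objective.
    return float(np.sum(x ** 2))

def numerical_grad(f, x, eps=1e-6):
    # Central-difference gradient estimate.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def local_descent(args):
    # "Conquer": plain projected gradient descent inside one sub-region.
    lower, upper, steps, lr = args
    x = np.random.uniform(lower, upper)
    for _ in range(steps):
        x = x - lr * numerical_grad(objective, x)
        x = np.clip(x, lower, upper)  # respect the bounds of the sub-region
    return objective(x), x

def spgd_like_search(lower, upper, n_parties=4, steps=200, lr=0.1):
    # "Divide": split the first coordinate's range among the search parties.
    edges = np.linspace(lower[0], upper[0], n_parties + 1)
    tasks = []
    for k in range(n_parties):
        lo, hi = lower.copy(), upper.copy()
        lo[0], hi[0] = edges[k], edges[k + 1]
        tasks.append((lo, hi, steps, lr))
    with Pool(n_parties) as pool:
        results = pool.map(local_descent, tasks)
    # "Assemble": keep the best point found by any party.
    return min(results, key=lambda r: r[0])

if __name__ == "__main__":
    lb, ub = np.full(3, -5.0), np.full(3, 5.0)
    best_val, best_x = spgd_like_search(lb, ub)
    print(best_val, best_x)
```

Splitting only the first coordinate keeps the sketch short; any partition of the bounds would fit the same divide/conquer/assemble pattern.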

Cited by 11 publications (2 citation statements) | References 45 publications
“…Inspired by this concept of parameter adaptation to overcome the problems of local minima and slow convergence rate, an enhanced simulated annealing (ESA) algorithm has been implemented in this work. By enhancing the convergence rates, a wide range of machine learning models can be explored to address the challenges in DSP (Syed Shahul Hameed and Rajagopalan, 2022, 2023).…”
Section: Related Work
confidence: 99%
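The quoted statement mentions an enhanced simulated annealing (ESA) algorithm that adapts its parameters to escape local minima and improve the convergence rate. The Python sketch below only illustrates that general idea under stated assumptions: the acceptance rule is standard Metropolis annealing, and the step-size adaptation (widen on acceptance, narrow on rejection) is an assumption for illustration, not the cited authors' exact scheme.

```python
# Hedged sketch of simulated annealing with a simple adaptive step size, loosely
# illustrating the kind of parameter adaptation the cited ESA work describes.
import math
import random

def anneal(f, x0, t0=1.0, cooling=0.95, step0=1.0, iters=2000):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    temp, step = t0, step0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            step *= 1.05   # assumed adaptation: widen the search after an acceptance
        else:
            step *= 0.95   # assumed adaptation: narrow the search after a rejection
        if fx < best_f:
            best_x, best_f = x, fx
        temp *= cooling    # geometric cooling schedule
    return best_x, best_f

# Example: minimize a 1-D function with many local minima.
print(anneal(lambda x: x * x + 10 * math.sin(x), x0=5.0))
```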
“…To address the limited number of samples in the dataset, data augmentation techniques were utilized, applying transformations such as rotation, scaling, and horizontal and vertical flipping. The optimizer and loss function are the key elements that enable the network to handle large amounts of data and regulate the learning speed [49]. Several optimizers, including Adam, Stochastic Gradient Descent, and RMSprop, were applied, and the Adam optimizer demonstrated superior performance.…”
Section: DenseNet-121
confidence: 99%
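A minimal PyTorch sketch of the training recipe that statement describes: augmentation by rotation, scaling, and flipping, followed by a comparison of Adam, SGD, and RMSprop on DenseNet-121. The dataset (torchvision's FakeData stand-in), the class count, learning rates, and the single-epoch loop are placeholder assumptions; only the augmentation types and the optimizer comparison come from the quoted text.

```python
# Hedged sketch: compare Adam, SGD, and RMSprop on DenseNet-121 with basic augmentation.
# Dataset and hyperparameters are placeholders, not the cited authors' setup.
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

augment = transforms.Compose([
    transforms.RandomRotation(30),                         # rotation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # scaling
    transforms.RandomHorizontalFlip(),                     # horizontal flip
    transforms.RandomVerticalFlip(),                       # vertical flip
    transforms.ToTensor(),
])
# Synthetic stand-in dataset so the sketch runs without real images.
data = datasets.FakeData(size=64, image_size=(3, 224, 224), num_classes=4, transform=augment)
loader = DataLoader(data, batch_size=8, shuffle=True)

def make_optimizer(name, params):
    if name == "adam":
        return optim.Adam(params, lr=1e-3)
    if name == "sgd":
        return optim.SGD(params, lr=1e-2, momentum=0.9)
    return optim.RMSprop(params, lr=1e-3)

def train_one_epoch(optimizer_name):
    # Fresh, untrained DenseNet-121 for each optimizer so the comparison is fair.
    model = models.densenet121(num_classes=4)
    opt = make_optimizer(optimizer_name, model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    running = 0.0
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
        running += loss.item()
    return running / len(loader)

for name in ("adam", "sgd", "rmsprop"):
    print(name, train_one_epoch(name))
```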