Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/443

Evolutionary Gradient Descent for Non-convex Optimization

Abstract: Non-convex optimization, which often arises in artificial intelligence tasks, may have many saddle points and is NP-hard to solve. Evolutionary algorithms (EAs) are general-purpose, derivative-free optimization algorithms with a good ability to find the global optimum, and can be naturally applied to non-convex optimization. Their performance is, however, limited by low efficiency. Gradient descent (GD) runs efficiently, but only converges to a first-order stationary point, which may be a saddle poin…
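The hybrid idea summarized in the abstract, EA-style exploration combined with GD-style local refinement, can be pictured with a short sketch. The code below is only a generic illustration under assumed design choices (population size, Gaussian mutation, a (mu+lambda)-style selection rule, a fixed step size); it is not the EGD algorithm proposed in the paper, and the test function and all names are my own.

```python
# Minimal, illustrative sketch of a gradient/evolutionary hybrid in the spirit
# described in the abstract. This is NOT the paper's EGD algorithm; the
# population size, mutation scale, step size, and selection rule below are
# assumptions chosen only to make the idea concrete.
import numpy as np

def hybrid_minimize(f, grad_f, dim, pop_size=10, iters=200,
                    step_size=0.05, mutation_scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(scale=2.0, size=(pop_size, dim))   # random initial population

    for _ in range(iters):
        # Gradient descent step on every individual (fast local refinement).
        pop = pop - step_size * np.array([grad_f(x) for x in pop])

        # Evolutionary step: Gaussian mutations create offspring that can
        # jump away from saddle points and poor basins.
        offspring = pop + rng.normal(scale=mutation_scale, size=pop.shape)

        # (mu + lambda)-style selection: keep the best pop_size candidates.
        combined = np.vstack([pop, offspring])
        fitness = np.array([f(x) for x in combined])
        pop = combined[np.argsort(fitness)[:pop_size]]

    best = min(pop, key=f)
    return best, f(best)

# Example on a simple non-convex test function (Rastrigin).
if __name__ == "__main__":
    f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    grad = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
    x_best, f_best = hybrid_minimize(f, grad, dim=5)
    print(x_best, f_best)
```

The gradient step gives each individual fast local progress, while the mutation-and-selection step supplies the global exploration that plain GD lacks.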

Cited by 7 publications (8 citation statements) · References 19 publications

Citation statements
“…[27] GD and EA can be combined in evolutionary gradient descent (EGD) methods.[28] EGD methods benefit from the merits of both GD and EA: they work well under nonconvexity (by escaping local solutions and variable-specific saddle points) while converging faster than EA methods.[28] In general, the speed of convergence depends on the nature of the optimization problem and the specifics of the iteration scheme in (14) and (15).…”
Section: Iterative Algorithm
confidence: 99%
“…[28] EGD methods benefit from the merits of both GD and EA: they work well under nonconvexity (by escaping local solutions and variable-specific saddle points) while converging faster than EA methods.[28] In general, the speed of convergence depends on the nature of the optimization problem and the specifics of the iteration scheme in (14) and (15). In this context, knowing the structure of the optimization problem can help speed up convergence.…”
Section: Iterative Algorithm
confidence: 99%
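To make the saddle-point claim in these statements concrete, here is a small toy example of my own (not taken from the cited papers): for f(x, y) = x^2 - y^2 the origin is a saddle point, plain GD started on the line y = 0 converges to it, and a single Gaussian "mutation" of the starting point is enough for subsequent GD steps to leave it.

```python
# Toy illustration (my own construction, not from the cited papers) of why a
# random evolutionary perturbation helps near a saddle point.
import numpy as np

f = lambda p: p[0] ** 2 - p[1] ** 2          # saddle at the origin
grad = lambda p: np.array([2 * p[0], -2 * p[1]])

def gd(p, steps=30, lr=0.1):
    for _ in range(steps):
        p = p - lr * grad(p)
    return p

p_plain = gd(np.array([1.0, 0.0]))           # start on the stable manifold y = 0
print("plain GD:", p_plain, f(p_plain))      # ends at (about) the saddle (0, 0)

rng = np.random.default_rng(0)
p_mut = np.array([1.0, 0.0]) + rng.normal(scale=0.1, size=2)  # one Gaussian "mutation"
p_escaped = gd(p_mut)
print("GD after mutation:", p_escaped, f(p_escaped))  # |y| grows, f drops below 0
```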
“…Benavoli, Azzimonti, and Piga (2021) improve the optimization performance of PBO by using a SkewGP model to fit the preference function, but still face the scalability issue. Sui et al. (2017) and Xu et al. (2020) propose to fix one solution when dueling throughout the optimization process, i.e., kernel-self-sparring (KSS) and comp-GP-UCB. In (Sui et al. 2017), KSS substitutes the preference function in PBO with the function whose value is the probability of one solution beating the optimal solution.…”
Section: Introduction
confidence: 99%
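The fixed-reference dueling idea attributed to KSS and comp-GP-UCB in this statement can be pictured with a small schematic: one solution is held fixed, and every candidate is scored only by how often it wins noisy pairwise comparisons against it. The preference model (a Bradley-Terry style logistic of an assumed hidden utility), the candidates, and all names below are illustrative assumptions, not the actual KSS or comp-GP-UCB procedures.

```python
# Schematic of a fixed-reference dueling setup: candidates are judged only by
# how often they beat one fixed solution. The hidden utility and the logistic
# preference model are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
utility = lambda x: -8.0 * (x - 0.3) ** 2     # hidden objective (assumed, peaks at 0.3)

def duel(x, x_ref):
    """Return True if x beats the fixed reference x_ref under a noisy preference."""
    p_win = 1.0 / (1.0 + np.exp(-(utility(x) - utility(x_ref))))
    return rng.random() < p_win

x_ref = 0.9                                   # the solution kept fixed while dueling
candidates = np.linspace(0.0, 1.0, 11)

# Estimate each candidate's probability of beating the reference from duels only.
win_rate = [np.mean([duel(x, x_ref) for _ in range(500)]) for x in candidates]
best = candidates[int(np.argmax(win_rate))]
print(best)                                   # typically a candidate near x = 0.3
```

The point of the sketch is that the optimizer never sees utility values directly, only win/loss outcomes against the fixed reference, which is the comparison oracle these preferential methods build on.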