2022
DOI: 10.1007/978-3-031-00832-0_3

Recent Theoretical Advances in Non-Convex Optimization

Cited by 32 publications (12 citation statements: 1 supporting, 11 mentioning, 0 contrasting)
References 111 publications

“…where the last approximation is valid for n ≫ 1. Thus, the probability to find beneficial moves, P(U_i > 0 | D) ≃ e^{−n/N}, decreases exponentially with n. Note that if n = 0 (no random trials have been made), equation (8) yields P(U_i > 0 | D) = N/(N + 1), consistent with the prior probability in equation (3), which assigns equal weights to all N + 1 values of U_i. Thus, in the beginning the system is very optimistic that a beneficial move will be found.…”
Section: Bayesian Estimation of the Probability to Find a Novel Benef... [supporting]
confidence: 57%
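
A quick numerical check of the two quoted limits (a minimal sketch; the closed form of equation (8) is not reproduced in the excerpt, so the expression (N/(N+1))^(n+1) below is an assumption chosen only to match P = N/(N+1) at n = 0 and P ≃ e^{−n/N} for n ≫ 1):

```python
import math

# Assumed closed form matching the two quoted limits of equation (8);
# NOT taken from the cited paper.
def p_beneficial(n: int, N: int) -> float:
    return (N / (N + 1)) ** (n + 1)

N = 1000
for n in (0, 1, 100, 5000):
    # exp(-n/N) is the large-n approximation quoted in the excerpt
    print(f"n={n:5d}  P={p_beneficial(n, N):.4f}  exp(-n/N)={math.exp(-n / N):.4f}")
```

At n = 0 the sketch returns 0.999 ≈ N/(N + 1), and by n = 5000 it agrees with e^{−n/N} to three decimal places, illustrating the exponential decay of optimism described in the quote.
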
“…The number of variables in X may be large in real-world applications and F(X) may be costly to evaluate, making it highly desirable to develop efficient global optimization strategies. In general, global optimization is an NP-hard problem [8] and there is no guarantee of finding the global maximum in a reasonable number of steps. Here, we focus on systems with discrete states (such as spin glasses [9,10]) for which the analytical gradient of the fitness function is not available.…”
Section: Introduction [mentioning]
confidence: 99%
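
The excerpt's setting, a discrete-state fitness with no analytical gradient, can be illustrated with a minimal sketch (the Ising-like fitness, the couplings J, and the greedy single-flip search are all assumptions for illustration, not the citing paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative discrete-state fitness F(X) on n binary spins with
# random symmetric couplings J (a toy spin-glass, assumed here).
n = 30
J = rng.standard_normal((n, n))
J = (J + J.T) / 2

def F(X: np.ndarray) -> float:
    return float(X @ J @ X)  # fitness to maximize

X = rng.choice([-1, 1], size=n)
for _ in range(5000):
    i = rng.integers(n)
    X_flip = X.copy()
    X_flip[i] = -X_flip[i]
    if F(X_flip) > F(X):  # greedy: accept only improving single flips
        X = X_flip

# Terminates at a local optimum; nothing guarantees it is global,
# which is the NP-hardness point made in the excerpt.
print("fitness at local optimum:", F(X))
```
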
“…This inequality is sometimes sufficient by itself for convergence analysis of optimization methods and is often exploited, for example, in the context of weakly convex optimization (see, e.g. [2,6] and the references therein). In [19], a natural extension of this inequality to the case of nonsmooth hypodifferentiable convex functions was proposed and studied.…”
Section: Lipschitzian Approximations and Lipschitz Continuous Hypodif... [mentioning]
confidence: 99%
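
The excerpt does not reproduce the inequality it refers to; for context, a canonical Lipschitzian-approximation inequality of the kind used in such convergence analyses is the descent lemma (stated here as background, not as a quote from [19]):

```latex
% Descent lemma: for f with L-Lipschitz continuous gradient,
\bigl| f(y) - f(x) - \langle \nabla f(x),\, y - x \rangle \bigr|
  \;\le\; \frac{L}{2}\,\lVert y - x \rVert^{2}
  \qquad \text{for all } x, y.
```
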
“…However, some of the projections involve nonconvex sets. Hence, the algorithm becomes stuck in local minima [12]. Although other methods have been developed to overcome these limitations by using the tools of modern optimization theory, including semi-definite programming (SDP) based approaches [13-15], regularization-based methods [16,17], global optimization methods [18], and Wirtinger flow and its variants [19], the computational complexity of these PR algorithms is large, and it is time-consuming to converge to a solution with high confidence.…”
Section: Introduction [mentioning]
confidence: 99%
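
Since the excerpt names Wirtinger flow explicitly, a minimal NumPy sketch of a Wirtinger-flow-style gradient iteration for phase retrieval may help (the sizes, step size, and random initialization are assumptions for illustration; the published method pairs these iterations with a spectral initializer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase retrieval: recover x from phaseless measurements y_i = |a_i^H x|^2
# by gradient descent on f(z) = (1/2m) * sum_i (|a_i^H z|^2 - y_i)^2.
m, d = 400, 20
x_true = rng.standard_normal(d) + 1j * rng.standard_normal(d)
A = (rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))) / np.sqrt(2)
y = np.abs(A @ x_true) ** 2

z = rng.standard_normal(d) + 1j * rng.standard_normal(d)  # crude random init
step = 0.2 / np.mean(y)  # roughly mu / ||x||^2, an assumed step size
for _ in range(2000):
    Az = A @ z
    # Wirtinger gradient of the quartic least-squares loss
    z = z - step * (A.conj().T @ ((np.abs(Az) ** 2 - y) * Az)) / m

# A solution is only defined up to a global phase, the inherent ambiguity
# of phase retrieval, so align phases before measuring the error.
c = np.vdot(z, x_true)
err = np.linalg.norm(z * (c / abs(c)) - x_true) / np.linalg.norm(x_true)
print(f"relative error up to global phase: {err:.3e}")
```

The loss is nonconvex (quartic in z), which is exactly why naive projection methods can stall in local minima as the excerpt notes; with enough measurements, gradient iterations of this form are known to converge well in practice.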