2015
DOI: 10.1137/140961602
Direct Search Based on Probabilistic Descent

Abstract: Direct-search methods are a class of popular derivative-free algorithms characterized by evaluating the objective function using a step size and a number of (polling) directions. When applied to the minimization of smooth functions, the polling directions are typically taken from positive spanning sets which in turn must have at least n+1 vectors in an n-dimensional variable space. In addition, to ensure the global convergence of these algorithms, the positive spanning sets used throughout the iterations are r…
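The polling scheme the abstract describes can be illustrated with a minimal sketch of directional direct search using the maximal positive basis {±e_1, …, ±e_n} (2n directions). This is not the paper's implementation; the function name, the sufficient-decrease forcing term alpha**2, and the step-halving factor are all illustrative assumptions.

```python
import numpy as np

def direct_search(f, x0, alpha0=1.0, tol=1e-6, max_iter=10_000):
    """Sketch of directional direct search with coordinate polling.

    Polls the 2n directions of the positive spanning set {+-e_1, ..., +-e_n};
    a poll step succeeds if it achieves sufficient decrease alpha**2,
    otherwise the step size alpha is halved. All names are illustrative.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    alpha = alpha0
    D = np.vstack([np.eye(n), -np.eye(n)])  # maximal positive basis (2n vectors)
    fx = f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        for d in D:
            trial = x + alpha * d
            ft = f(trial)
            if ft < fx - alpha**2:      # sufficient-decrease condition
                x, fx = trial, ft       # successful poll: accept the point
                break
        else:
            alpha *= 0.5                # unsuccessful poll: shrink the step
    return x, fx
```

On a smooth convex test function this converges to the minimizer once the step size can no longer produce sufficient decrease along any of the 2n directions.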

Cited by 65 publications (136 citation statements)
References 24 publications (44 reference statements)
“…Such a bound, when m is much smaller than n, is better than the optimal situation that we already proved for the bound (4). The advantage was also observed in the numerical results of [14]. This comparison, more rigorously made given the contribution of our paper, suggests that randomization has the potential to improve the efficiency of some classical algorithms.…”
Section: Final Remarks (supporting)
confidence: 70%
“…The optimality of D⊕ for the case m = 2n is already proved when n = 3 (see [11, Theorem 5.4.1]) and n = 4 (see [8, Theorem 6.7.1]), but it is open when n ≥ 5 according to [3] (see also [2, Page 194] and [4, Conjecture 1.3]). Instead of PSSs, Gratton et al. [14] propose to use random polling directions in Algorithm 2.1. When minimizing a smooth (possibly non-convex) objective function, the resulting algorithm enjoys an O(mnϵ⁻²) WCC bound for the number of function evaluations (with overwhelmingly high probability), where m is the number of random directions used in each poll step.…”
Section: Final Remarks (mentioning)
confidence: 99%
“…Recently several methods for unconstrained black-box optimization have been proposed, which rely on random models or directions [1,13,16], but are applied to deterministic functions. In this paper we take this line of work one step further by establishing expected convergence rates for several schemes based on one generic analytical framework.…”
Section: Introduction (mentioning)
confidence: 99%
“…A set of directions is probabilistically descent if at least one of them makes an acute angle with the negative gradient with a certain probability. Direct search based on probabilistic descent has been proved globally convergent with probability one [77]. Polling based on a reduced number of randomly generated directions (which can go down to two) satisfies the theoretical requirements [77] and can provide numerical results that compare favorably to the traditional use of PSSs.…”
Section: Models and Descent of Probabilistic Type (mentioning)
confidence: 99%
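The statement above, that polling with as few as two randomly generated directions suffices, can be sketched as follows. This is a minimal illustration under assumed choices (a random unit direction and its negative, a forcing term alpha**2, doubling/halving step-size updates), not the algorithm of [77] as published.

```python
import numpy as np

def probabilistic_descent_ds(f, x0, alpha0=1.0, tol=1e-6,
                             max_iter=50_000, seed=0):
    """Sketch of direct search with random polling (probabilistic descent).

    Instead of a positive spanning set, each poll uses m = 2 directions:
    a direction d drawn uniformly on the unit sphere and its negative.
    With a certain probability, one of them makes an acute angle with the
    negative gradient. Names and constants are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    alpha, fx = alpha0, f(x)
    for _ in range(max_iter):
        if alpha < tol:
            break
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)          # uniform direction on the sphere
        success = False
        for direction in (d, -d):       # m = 2 opposite random directions
            trial = x + alpha * direction
            ft = f(trial)
            if ft < fx - alpha**2:      # sufficient-decrease condition
                x, fx, success = trial, ft, True
                break
        if success:
            alpha = min(2.0 * alpha, alpha0)  # expand on success
        else:
            alpha *= 0.5                      # contract on failure
    return x, fx
```

Note that each poll costs at most 2 evaluations regardless of the dimension n, which is the source of the favorable evaluation-count behavior the citing papers discuss.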
“…It has been proved in [77] that both probabilistic approaches (for trust regions and direct search) enjoy, with overwhelmingly high probability, a gradient decay rate of 1/√k or, equivalently, that the number of iterations taken to reach a gradient of size ϵ is O(ϵ⁻²). Interestingly, the WCC bound in terms of function evaluations for direct search based on probabilistic descent is reduced to O(nmϵ⁻²), where m is the number of random poll directions [77].…”
Section: Models and Descent of Probabilistic Type (mentioning)
confidence: 99%