2012
DOI: 10.1093/imanum/drs027

Smoothing and worst-case complexity for direct-search methods in nonsmooth optimization

Abstract: In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst-case complexity of direct-search methods is of the same order as that of steepest descent for derivative-based optimization; more precisely, the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportional to the inverse of the threshold squared. Motivated by the lack of such a result in the non-smooth case, we propose a…
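In symbols, the smooth-case result the abstract refers to is the following bound, restated here for readability in standard notation (ε denotes the gradient-norm threshold):

```latex
% Smooth case: the number of iterations k needed to drive the gradient norm
% below a threshold \epsilon > 0 is proportional to the inverse of the
% threshold squared:
\#\{\text{iterations until } \|\nabla f(x_k)\| \le \epsilon\}
  \;=\; \mathcal{O}\!\left(\epsilon^{-2}\right).
```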

Citations: cited by 38 publications (53 citation statements)
References: 34 publications
“…Cartis, Gould, and Toint [2] have derived a WCC bound of O(n^2 ε^{-3/2}) for their adaptive cubic overestimation algorithm when using finite differences to approximate derivatives. In the non-smooth case, using smoothing techniques, both Garmanjani and Vicente [6] and Nesterov [12] established a WCC bound of approximately O(ε^{-3}) iterations for their zero-order methods, where the threshold ε now refers to the gradient of a smoothed version of the original function.…”
Section: Introduction
confidence: 99%
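To make "the gradient of a smoothed version of the original function" concrete, one standard construction is Nesterov-style Gaussian smoothing; the form below illustrates that idea and is not necessarily the exact smoothing used in [6]:

```latex
% Gaussian smoothing of a nonsmooth f with smoothing parameter \mu > 0
% (differentiable even when f is not); the O(\epsilon^{-3}) bounds count
% the iterations needed to drive the gradient of this smoothed function
% below \epsilon for a suitably small \mu:
f_{\mu}(x) = \mathbb{E}_{u \sim \mathcal{N}(0, I)}\big[f(x + \mu u)\big],
\qquad \|\nabla f_{\mu}(x_k)\| \le \epsilon .
```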
“…They prove that it takes at most O(ε^{-2}) iterations to reduce a first-order criticality measure below ε in a first-order trust-region method or a quadratic regularization method, where the worst-case complexity result is of the same order as the function-evaluation complexity of steepest-descent methods applied to the case in which Φ_h is differentiable. In [19], Garmanjani and Vicente propose a smoothing direct-search (DS) algorithm based on smoothing techniques and derivative-free methods to solve a general unconstrained nonsmooth, nonconvex, Lipschitzian minimization problem. The smoothing DS algorithm can be seen as a zero-order method, in which no gradients are computed.…”
Section: Φ_h(x) := f(x) + h(c(x))
confidence: 99%
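As a rough picture of how such a smoothing direct-search loop can be organized, here is a minimal sketch; the use of sampled Gaussian smoothing, all function names, and all parameter values are illustrative assumptions, not the algorithm of [19] as published:

```python
import numpy as np

def smoothing_direct_search(f, x0, mu0=1.0, alpha0=1.0, tol=1e-6,
                            max_iter=5000, samples=32, seed=0):
    """Sketch of a smoothing direct-search loop (illustration only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    mu, alpha = mu0, alpha0

    def f_smooth(z):
        # Monte Carlo estimate of a Gaussian smoothing f_mu(z) = E[f(z + mu*u)];
        # a real method would control this approximation error explicitly.
        u = rng.standard_normal((samples, n))
        return float(np.mean([f(z + mu * ui) for ui in u]))

    for _ in range(max_iter):
        fx = f_smooth(x)
        improved = False
        # Poll the 2n positive/negative coordinate directions.
        for d in np.vstack((np.eye(n), -np.eye(n))):
            trial = x + alpha * d
            # Sufficient-decrease test with forcing function rho(alpha) = alpha**2.
            if f_smooth(trial) < fx - alpha ** 2:
                x, improved = trial, True
                break
        if improved:
            alpha = min(2.0 * alpha, alpha0)  # successful poll: keep/expand the step
        else:
            alpha *= 0.5                      # unsuccessful poll: contract the step
            if alpha < mu:                    # step has resolved this smoothing level,
                mu *= 0.5                     # so refine the smoothing parameter
        if max(alpha, mu) < tol:
            break
    return x

# Example use on the nonsmooth function f(x) = |x_1| + |x_2|:
x_star = smoothing_direct_search(lambda z: float(np.abs(z).sum()),
                                 np.array([3.0, -2.0]))
```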
“…In this paper, we present a smoothing quadratic regularization (SQR) algorithm for solving (1.1) together with a worst-case complexity estimate. The SQR algorithm uses smoothing functions [2,10,19,26,30] and regularization methods [6,7,8,28]. At each iteration, the SQR algorithm solves a strongly convex quadratic minimization problem with a diagonal Hessian matrix, which has a simple closed-form solution that is inexpensive to calculate.…”
Section: Φ_h(x) := f(x) + h(c(x))
confidence: 99%
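For intuition on why such a subproblem is inexpensive: a strongly convex quadratic with a diagonal Hessian has a componentwise closed-form minimizer. The notation below is assumed for illustration and is not taken from the SQR paper itself:

```latex
% Illustrative SQR-type subproblem: g is the current (smoothing) gradient and
% D is a diagonal, positive-definite matrix (quadratic model plus regularization).
% Setting \nabla q(d) = g + D d = 0 gives a componentwise, O(n)-cost minimizer:
\min_{d \in \mathbb{R}^n} \; q(d) = g^{\top} d + \tfrac{1}{2}\, d^{\top} D\, d
\qquad\Longrightarrow\qquad
d_i^{*} = -\,\frac{g_i}{D_{ii}}, \quad i = 1, \dots, n.
```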