Derivative-Free Optimization (2011)
DOI: 10.1007/978-3-642-20859-1_4

Cited by 71 publications (39 citation statements). References 54 publications.
“…This sampling step can be replaced by an optimization method such as Trust Region [36][37][38] or Line Search [39,40]. However, for these methods to work, the gradient approximation (18b) must satisfy certain regularity conditions [41][42][43][44][45], for instance smoothness, which does not necessarily hold in practice.…”
Section: Sampling Method - Approaching the Posterior
Citation type: mentioning; confidence: 99%
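To make the line-search idea in this excerpt concrete, here is a minimal Python sketch combining a forward-difference gradient estimate with Armijo backtracking. The objective f, the step size h, and the backtracking parameters are illustrative choices, not taken from the cited works, and the scheme only behaves well when the smoothness assumption the excerpt highlights actually holds.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient estimate; assumes f is smooth near x."""
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

def armijo_step(f, x, g, t=1.0, beta=0.5, c=1e-4):
    """Backtracking line search along the negative (approximate) gradient."""
    while f(x - t * g) > f(x) - c * t * (g @ g):
        t *= beta
    return x - t * g

# Hypothetical smooth objective; one descent step as a usage example.
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
x = np.array([0.0, 0.0])
x_new = armijo_step(f, x, fd_gradient(f, x))
```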
“…Often, work involves evaluating different solvers for smooth or noisy constraint functions [7], or interpolation and approximation [6]. This paper takes the latter approach, sampling the constraint function instead of knowing it outright.…”
Section: Derivative Free Optimization
Citation type: mentioning; confidence: 99%
“…Before we derive the numerical optimization algorithm, we first present the following definition of simplex gradient [34].…”
Section: Derivative-free Optimization
Citation type: mentioning; confidence: 99%
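For reference, a standard definition of the simplex gradient (following Kelley's textbook treatment, which may differ in detail from the form in [34]): given vertices x_0, ..., x_n of a nonsingular simplex, it is the solution g of S^T g = delta(f), where the columns of S are the edges x_i - x_0 and delta(f) collects the differences f(x_i) - f(x_0). A minimal Python sketch under that assumption:

```python
import numpy as np

def simplex_gradient(f, X):
    """Simplex gradient: solve S^T g = delta_f, where S stacks the
    edge directions x_i - x_0 as columns and delta_f collects the
    corresponding function-value differences."""
    x0, rest = X[0], X[1:]
    S = np.column_stack([xi - x0 for xi in rest])   # n x n edge matrix
    df = np.array([f(xi) - f(x0) for xi in rest])   # function differences
    return np.linalg.solve(S.T, df)                 # g = S^{-T} delta_f

# Sanity check: on a linear function the simplex gradient is exact.
f = lambda x: 3.0 * x[0] - 2.0 * x[1]
X = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(simplex_gradient(f, X))   # approximately [ 3., -2.]
```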
“…Extremum seeking control based on the gradient, or an estimate of it, is the most straightforward approach [32,33], but the requirement of continuously measuring and approximating the gradient or the Hessian is very strong. Derivative-free numerical optimization methods, on the other hand, avoid approximating the gradient or the Hessian and instead build an approximate model of the objective function from sampled function values [34].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
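As an illustration of that model-building idea (a generic sketch, not the specific method of [34]), the following Python snippet fits a local quadratic surrogate to sampled objective values by least squares; the objective, sampling radius, and sample count are all hypothetical.

```python
import numpy as np

def fit_quadratic_model(f, x0, radius=0.1, n_samples=30, seed=0):
    """Least-squares fit of a local surrogate
    m(s) = c + g.s + 0.5 s^T H s from sampled function values,
    standing in for the interpolation models of model-based DFO."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    S = rng.uniform(-radius, radius, size=(n_samples, n))  # sample offsets
    y = np.array([f(x0 + s) for s in S])
    iu = np.triu_indices(n)
    # Feature matrix: constant, linear terms, upper triangle of s s^T.
    Phi = np.hstack([np.ones((n_samples, 1)), S,
                     np.array([np.outer(s, s)[iu] for s in S])])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coef  # constant, gradient entries, quadratic entries

f = lambda x: (x[0] - 1.0) ** 2 + x[0] * x[1] + x[1] ** 2
coef = fit_quadratic_model(f, np.array([0.0, 0.0]))
```

In a full model-based method the surrogate would then be minimized within a trust region and the sample set updated, but that loop is omitted here for brevity.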