1998
DOI: 10.1023/a:1022612511618
Global Optimization Requires Global Information

Abstract: There are many global optimization algorithms which do not use global information. We broaden previous results, showing limitations on such algorithms, even if allowed to run forever. We show deterministic algorithms must sample a dense set to find the global optimum value and can never be guaranteed to converge only to global optimizers. Further, analogous results show introducing a stochastic element does not overcome these limitations. An example is simulated annealing in practice. Our results show there a…
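To make the abstract's density requirement concrete, here is a minimal sketch (mine, not from the paper): a deterministic search on [0, 1] whose evaluation points are dense in the domain. For any continuous objective, the running best value converges to the global minimum value; the paper's point is that, absent global information such as a Lipschitz constant, an algorithm cannot be guaranteed to find the optimum value without this kind of dense sampling. The function names and the example objective are illustrative assumptions.

```python
import math

def dense_grid_search(f, levels=12):
    """Evaluate f on successively finer uniform grids over [0, 1].

    The union of all grid points is dense in [0, 1], so for continuous f
    the running best value approaches the global minimum value as the
    number of levels grows.
    """
    best_x, best_val = 0.0, f(0.0)
    for level in range(1, levels + 1):
        n = 2 ** level
        for i in range(n + 1):
            x = i / n
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    # Illustrative objective with a non-obvious global minimum on [0, 1].
    f = lambda x: math.sin(13 * x) * math.sin(27 * x) + 0.7
    print(dense_grid_search(f))
```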

Cited by 57 publications (41 citation statements)
References 16 publications
“…In either case, f(Xₙ*(ω)) → f* for the given ω ∈ Nᶜ. Thus, f(Xₙ*) → f* a.s. Törn and Zilinskas (1989)) that is different from the theorem proved by Stephens and Baritompa (1998).…”
Section: P(Algorithm sees the global minimum of g) ≥ p, ∀g ∈ C(D) (contrasting)
confidence: 61%
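As a hedged illustration of the almost-sure convergence of the running best value f(Xₙ*) → f* discussed in this excerpt (assumptions mine, not the cited proof): pure random search with uniform sampling on [0, 1]^d visits every open subset with probability one, so for a continuous objective the best observed value converges almost surely to the global minimum value. The helper below is hypothetical.

```python
import random

def pure_random_search(f, dim, n_samples, seed=0):
    """Uniform random sampling on [0, 1]^dim, tracking the running best value."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```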
“…The purpose of this section is to explore some connections between the results in Section 3 and the results in a paper by Stephens and Baritompa (1998). Recall the notation in Section 3.…”
Section: Related Convergence Results (mentioning)
confidence: 94%
“…The focus of active learning is usually to learn better predictive models rather than to perform optimization. Reinforcement learning [190] is broadly concerned with what set of actions to take in an environment to maximize some notion of cumulative reward. Reinforcement learning methods have strong connections to information theory, optimal control, and statistics.…”
Section: Derivative-free (mentioning)
confidence: 99%
“…However, for the sake of completeness we prove in this section the convergence of the above described branching procedure, independently of the algorithm employed to evaluate the nodes of the tree, when we remove the node deletion step and, consequently, run the algorithm for an infinite amount of time. As remarked, e.g., in [20,22], in all cases where no global information about the objective function, such as the value of its Lipschitz constant, is available, the only way to ensure that a GO method is convergent is that the set of points at which the function is observed is dense within the feasible region. In the branching procedure described above this can be guaranteed if the branching mechanism is exhaustive, i.e., each nested sequence of subdomains in F converges to a single point if the algorithm is never stopped (see, e.g., [11]).…”
Section: Convergence of the Branching Algorithm (mentioning)
confidence: 99%
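The exhaustive-branching property mentioned in the excerpt can be sketched in a few lines (assumptions mine; this is not the cited paper's algorithm). Always bisecting the currently widest interval ensures every nested sequence of subintervals shrinks to a point, so the midpoints at which the objective is evaluated form a dense subset of the domain, which is exactly the condition the excerpt identifies as necessary for convergence when no global information is available.

```python
import heapq

def exhaustive_bisection(f, lo, hi, n_splits=200):
    """Repeatedly bisect the widest remaining subinterval of [lo, hi].

    Evaluates f at each new subinterval's midpoint; because the widest
    interval is always split, all subinterval widths tend to zero and the
    evaluation points become dense in [lo, hi].
    """
    mid = (lo + hi) / 2.0
    best_x, best_val = mid, f(mid)
    # Max-heap on interval width (widths negated for heapq's min-heap).
    heap = [(-(hi - lo), lo, hi)]
    for _ in range(n_splits):
        _, a, b = heapq.heappop(heap)
        m = (a + b) / 2.0
        for left, right in ((a, m), (m, b)):
            c = (left + right) / 2.0
            val = f(c)
            if val < best_val:
                best_x, best_val = c, val
            heapq.heappush(heap, (-(right - left), left, right))
    return best_x, best_val
```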