2000
DOI: 10.1287/opre.48.6.939.12393

Global Stochastic Optimization with Low-Dispersion Point Sets

Abstract: This study concerns a generic model-free stochastic optimization problem requiring the minimization of a risk function defined on a given bounded domain in a Euclidean space. Smoothness assumptions regarding the risk function are hypothesized, and members of the underlying space of probabilities are presumed subject to a large deviation principle; however, the risk function may well be nonconvex and multimodal. A general approach to finding the risk minimizer on the basis of decision/observation pairs is propo…

Cited by 59 publications (36 citation statements)
References 24 publications

“…that satisfy the large deviations principle (cf. e.g., [16], [32]). Assumption L2 can be viewed as a simple extension of L1.…”
Section: Where φ(·, ·) Satisfies the Conditions in L1
confidence: 99%
“…A detailed review of various gradient estimation techniques can be found in [20] and [11]. Also of relevance to our work is the low-dispersion point sets method of [32], which uses the idea of quasirandom search for continuous global optimization and the large deviation principle to choose the evaluation points within the decision domain and to (adaptively) determine the number of simulation observations to be allocated to these points.…”
Section: Introduction
confidence: 99%
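The statement above describes the core of the indexed paper's approach: place evaluation points as a low-dispersion (quasirandom) set over the decision domain and average repeated noisy observations at each point. The following Python sketch illustrates that idea with a Halton point set on the unit square and a fixed, non-adaptive observation budget; all function names and parameter values are illustrative, and the paper's large-deviations rule for allocating observations is not reproduced here.

```python
import random

def van_der_corput(n, base):
    """Radical inverse of n in the given base; returns a value in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton_points(count):
    """2-D low-dispersion (Halton) point set in the unit square."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, count + 1)]

def quasirandom_minimize(noisy_f, n_points=64, reps=50):
    """Average `reps` noisy observations at each low-dispersion point
    and return the point with the smallest estimated risk."""
    best, best_val = None, float("inf")
    for p in halton_points(n_points):
        est = sum(noisy_f(p) for _ in range(reps)) / reps
        if est < best_val:
            best, best_val = p, est
    return best

random.seed(0)

def noisy_risk(p):
    # Illustrative smooth risk with minimum at (0.3, 0.7),
    # observed through additive Gaussian noise.
    x, y = p
    return (x - 0.3) ** 2 + (y - 0.7) ** 2 + random.gauss(0.0, 0.01)

xhat = quasirandom_minimize(noisy_risk)
```

Because the Halton set has low dispersion, some evaluation point lands near the true minimizer, and averaging repeated observations keeps the noise from masking it.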
“…In particular, Yakowitz and Lugosi [35] developed a method that at certain iterations samples new solutions from a fixed global distribution and ensures that every sampled point has "enough" observations, and at other times it adaptively resamples previously sampled points. Yakowitz et al [34] proposed two approaches that use low-dispersion point sets and emphasize how the number of such points should be determined depending on the problem and the simulation budget. Baumert and Smith [5] proposed an approach based on pure random search that estimates the objective function value at each solution θ by averaging all observations within a certain distance from θ and discussed how this distance should decrease in order for the method to converge in probability.…”
Section: Introduction
confidence: 99%
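The Baumert and Smith approach quoted above estimates the objective value at each sampled solution θ by averaging all observations within a shrinking distance of θ. A minimal one-dimensional Python sketch of that shrinking-ball idea follows; the radius schedule and all names are illustrative assumptions, not the authors' actual convergence-guaranteeing rule.

```python
import random

def ball_average_search(noisy_f, n_iters=400, seed=1):
    """Pure random search on [0, 1]; the value at each sampled point is
    estimated by averaging every observation that fell within a shrinking
    ball around it (heuristic radius schedule, for illustration only)."""
    rng = random.Random(seed)
    samples = []                      # (theta, noisy observation) pairs
    best, best_val = None, float("inf")
    for k in range(1, n_iters + 1):
        theta = rng.random()
        samples.append((theta, noisy_f(theta, rng)))
        r = 0.5 / k ** 0.25           # ball radius decreasing with k
        near = [y for t, y in samples if abs(t - theta) <= r]
        est = sum(near) / len(near)
        if est < best_val:
            best, best_val = theta, est
    return best

def noisy_risk(theta, rng):
    # Illustrative risk with minimum at 0.6, plus observation noise.
    return (theta - 0.6) ** 2 + rng.gauss(0.0, 0.05)

theta_hat = ball_average_search(noisy_risk)
```

Shrinking the radius trades bias (averaging over a neighborhood smooths the objective) against variance (fewer observations fall inside a smaller ball), which is exactly the tension the cited discussion of the decay rate addresses.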
“…7]), the implementation has been far less successful: no adaptive method appears to have been proposed that is practically implementable in a wide range of general multivariate problems (e.g., " the optimal choice [of gain sequence] involves the Hessian of the risk [loss] function, which is typically unknown and hard to estimate," from Yakowitz et al [44]). Let us summarize some of the existing approaches to illustrate the difficulties.…”
confidence: 99%