2010
DOI: 10.1109/tac.2010.2078170

Stochastic Optimization on Continuous Domains With Finite-Time Guarantees by Markov Chain Monte Carlo Methods

Abstract: We introduce bounds on the finite-time performance of Markov chain Monte Carlo algorithms in approaching the global solution of stochastic optimization problems over continuous domains. A comparison with other state-of-the-art methods having finite-time guarantees for solving stochastic programming problems is included.

Cited by 24 publications (12 citation statements)
References 44 publications
“…Simulation-based optimization algorithms for Markov chains help to maximize the average reward of a parameterized Markov chain [67,68]. Markov-chain models can be combined with Monte Carlo algorithms to solve global stochastic optimization problems defined over continuous domains [69]. An important scientific issue is the control of Markovian processes.…”
Section: Markov-chain Modelling
confidence: 99%
“…Many studies are devoted to models and methods of detection based on the use of Markov chains [20][21][22][23]. The typical disadvantage of most SRCA proposed in these studies is the lack of a way to quickly replenish the repository of cyberattack patterns, since they almost always rely on a single recognition methodology.…”
Section: Literature Review and Problem Statement
confidence: 99%
“…In our case, it is of the exponential form π∞(x) = (1/M) e^(−βV(x)) [14], where M is the (unknown) normalization constant. For the first part, a result on finite-time guarantees for SA optimization over continuous domains was recently obtained in [28]. Informally, given a desired precision of the optimization, it provides a number of samples after which we are guaranteed that the minimum so far is within the desired precision of the global minimum.…”
Section: Practical Considerations
confidence: 99%
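
As context for the citation above, the sketch below illustrates the general mechanism it refers to: a Metropolis-type chain whose stationary density is proportional to e^(−βV(x)) concentrates near the minimizers of V, so the running minimum over a finite sample budget serves as the optimization estimate. This is a minimal illustration, not the cited paper's algorithm or its finite-time bounds; the objective V, the proposal scale, the inverse temperature beta, and the sample budget are all illustrative assumptions.

# Minimal sketch (assumptions, not the authors' method): random-walk Metropolis
# sampling from a target proportional to exp(-beta * V(x)), tracking the running
# minimum of V as the optimization output.
import math
import random

def V(x):
    # Hypothetical objective on the real line with several local minima.
    return x * x + 2.0 * math.sin(5.0 * x)

def mcmc_minimize(n_samples=20000, beta=5.0, step=0.5, x0=0.0, seed=0):
    rng = random.Random(seed)
    x, vx = x0, V(x0)
    best_x, best_v = x, vx
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)      # Gaussian random-walk proposal
        vy = V(y)
        # Metropolis acceptance for a target proportional to exp(-beta * V);
        # the unknown normalization constant M cancels in the acceptance ratio.
        if math.log(rng.random()) < -beta * (vy - vx):
            x, vx = y, vy
        if vx < best_v:                   # running minimum = current optimization estimate
            best_x, best_v = x, vx
    return best_x, best_v

if __name__ == "__main__":
    x_star, v_star = mcmc_minimize()
    print(f"approximate minimizer: {x_star:.4f}, value: {v_star:.4f}")

The sample budget here is a fixed illustrative number; the point of the cited work is precisely to bound how large that budget must be for the running minimum to be within a desired precision of the global minimum with a prescribed confidence.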