2017
DOI: 10.1007/s10898-017-0544-7
On the convergence rate issues of general Markov search for global minimum

Abstract: This paper focuses on the convergence rate problem of general Markov search for global minimum. Many existing methods are designed to overcome a very hard problem: how to efficiently localize and approximate the global minimum of a multimodal function f when the only available information is the f-values evaluated at generated points. Because such methods use poor information on f, the following problem may occur: the closer to the optimum, the harder it is to generate a "better" (in the sense of…

Cited by 18 publications (7 citation statements). References 38 publications.
“…The above illustrative example leads to a natural generalization. Pure Random Search is a typical example of slow convergence, and its bad convergence properties follow from a more general idea called lazy convergence, a concept introduced in [24].…”
Section: Discussion
confidence: 99%
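To make the cited behavior concrete, here is a minimal Pure Random Search sketch (my own illustration, not code from the cited papers): it samples candidates uniformly and keeps the best point seen so far, so the probability that a fresh sample improves on the incumbent shrinks as the search progresses. The objective, bounds, and budget below are placeholder assumptions.

```python
import random

def pure_random_search(f, lower, upper, n_iters, seed=0):
    """Minimal Pure Random Search on a box [lower, upper]^d.

    Each candidate is drawn uniformly at random, independently of the
    past, and the best point found so far is retained. Illustrative
    sketch only; f, the bounds, and n_iters are placeholders.
    """
    rng = random.Random(seed)
    dim = len(lower)
    best_x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
    best_f = f(best_x)
    improvements = 0
    for _ in range(n_iters - 1):
        x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
        fx = f(x)
        if fx < best_f:  # improvement events become rarer over time
            best_x, best_f = x, fx
            improvements += 1
    return best_x, best_f, improvements

# For i.i.d. sampling, sample t+1 beats the best of the first t samples
# with probability 1/(t+1), so improvements arrive ever more slowly.
x, fx, k = pure_random_search(lambda v: sum(c * c for c in v),
                              lower=[-1.0], upper=[1.0], n_iters=10_000)
print(fx, k)  # best value found and number of improvement events
```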
“…Example 6: Lazy convergence. Now we will adapt the definition from [24] to Markov chains and define lazy convergence as a property of the probability kernel P. We will say that X_t converges lazily to x_0 iff X_t → x_0 in probability and:…”
Section: Discussion
confidence: 99%
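The notation X_t → x_0 in probability has the standard meaning unpacked below; the second, "lazy" condition of the definition is truncated in the quoted excerpt, so only the first part is reproduced here.

```latex
% Convergence in probability of the search sequence (X_t) to x_0:
% for every tolerance \varepsilon > 0, the probability mass outside
% the \varepsilon-ball around x_0 vanishes as t grows.
X_t \xrightarrow{\;P\;} x_0
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0:\;
\lim_{t \to \infty} \Pr\bigl( \lVert X_t - x_0 \rVert > \varepsilon \bigr) = 0 .
```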
“…Now we will focus on Markov chains. We will start from the following definition, which transfers the definition from [28] to Markov chains with probability kernel P.…”
confidence: 99%
“…The intuition behind the above is: the closer to the global solution, the harder it is to generate a better candidate than the current one, because an optimization method uses poor information on the function f. Various lazy examples may be found in [28] (including PRS and Algorithm 4, used for the simulations below). Lazy methods do not converge exponentially fast for any nontrivial problem function h; see below.…”
confidence: 99%
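As a worked example of this intuition (my own illustration, not taken from the cited papers): for PRS with i.i.d. uniform sampling on [0, 1] and the objective f(x) = x, both the improvement probability and the expected optimality gap decay only polynomially, never exponentially.

```latex
% PRS on [0,1] with f(x) = x and i.i.d. uniform samples X_1, X_2, ...
% Probability that sample t+1 improves on the best of the first t:
\Pr\Bigl( X_{t+1} < \min_{s \le t} X_s \Bigr) = \frac{1}{t+1}
\;\xrightarrow[t \to \infty]{}\; 0 ,
% while the expected optimality gap after t samples decays like 1/t,
% i.e. polynomially rather than exponentially:
\mathbb{E}\Bigl[ \min_{s \le t} X_s \Bigr] = \frac{1}{t+1} .
```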