Global optimization and stochastic differential equations (1985)
DOI: 10.1007/bf00941312

Cited by 209 publications (100 citation statements). References 3 publications.

“…Straightforward calculations show that the evolution of the bias m_t and the variance … Note that this behavior is really different from the behavior in a fixed environment. In a fixed environment (v = 0) the asymptotic bias is negligible if compared with the variance.…”
Section: Mathematical Approach
confidence: 96%
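
To make the quoted claim concrete, here is a minimal sketch in my own notation (a linearized one-dimensional online update; none of this is taken from the citing paper). Assume
$$ w_{t+1} = w_t - \eta\,\bigl[a\,(w_t - w^*) + \xi_t\bigr], $$
with learning parameter $\eta$, curvature $a > 0$ and zero-mean sampling noise $\xi_t$ of variance $\sigma^2$. The bias $m_t = \langle w_t \rangle - w^*$ and the variance $\sigma_t^2$ then evolve as
$$ m_{t+1} = (1 - \eta a)\,m_t, \qquad \sigma_{t+1}^2 = (1 - \eta a)^2\,\sigma_t^2 + \eta^2 \sigma^2, $$
so the bias decays to zero exponentially while the variance settles at $\eta \sigma^2 / (2a - \eta a^2) = \mathcal{O}(\eta)$: in a fixed environment the residual bias is indeed negligible compared with the fluctuations.
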
“…In practice this means that we may still apply the Fokker-Planck approximation if we study learning with just one minimum, but must suppress the temptation to extend this approach to learning with various minima. From the linear Fokker-Planck equation (12) and the asymptotic evolution equations (20) we conclude that the asymptotic probability distribution for small learning parameters is a simple Gaussian, with its average at the fixed point w and a covariance matrix … In this section we will point out the difference between the "intrinsic" noise due to the random presentation of training patterns and the "artificial" noise in studies on the generalization capabilities of neural networks (see e.g. [57, 64]).…”
Section: A Small-Fluctuations Expansion
confidence: 99%
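
The small-fluctuations picture can be checked numerically; the following is a minimal sketch of my own (the quadratic loss, curvature, noise level and learning rate are assumptions, not taken from the citing paper): for a small learning parameter, online gradient descent with sampling noise settles into an approximately Gaussian stationary distribution centred at the fixed point, with variance of order the learning parameter.

import numpy as np

rng = np.random.default_rng(0)

w_star = 1.0   # fixed point (minimum of the average loss)
a = 2.0        # local curvature of the average loss at w_star
sigma = 0.5    # std of the "intrinsic" noise from random pattern presentation
eta = 0.01     # small learning parameter

w = 0.0
samples = []
for t in range(200_000):
    noisy_grad = a * (w - w_star) + sigma * rng.standard_normal()
    w -= eta * noisy_grad
    if t >= 50_000:            # discard the transient, keep the stationary part
        samples.append(w)

samples = np.array(samples)
print("empirical mean     :", samples.mean())               # close to w_star
print("empirical variance :", samples.var())
print("predicted variance :", eta * sigma**2 / (2 * a))     # small-fluctuations prediction
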
“…For instance, the algorithms of Piyavskii-Shubert [1, 2] and its variants [3], Mladineo [4], Wood [5], Brent [6], and Breiman & Cutler [7], interval methods [8, 9], and "standard" simulated annealing ([10] for the discrete setting, [11, 12] for the continuous one). All of these algorithms share the property that they require global information in the form of a parameter (e.g.…”
Section: Introduction
confidence: 99%
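
The continuous-setting simulated annealing mentioned here, which is also the approach studied in the cited 1985 paper, can be sketched with a Langevin stochastic differential equation dw = -f'(w) dt + sqrt(2 T(t)) dB_t whose temperature T(t) is slowly decreased. The following is my own illustration under assumed settings (the double-well objective, step size and logarithmic cooling schedule are not taken from any of the cited works); the noise term lets the trajectory escape the shallow local minimum and, as the temperature falls, it concentrates near the global one.

import numpy as np

def f(w):
    # tilted double well: local minimum near w = -0.9, global minimum near w = 2.1
    return 0.5 * (w + 1.0)**2 * (w - 2.0)**2 - (w + 1.0)

def grad_f(w):
    return (w + 1.0) * (w - 2.0)**2 + (w + 1.0)**2 * (w - 2.0) - 1.0

rng = np.random.default_rng(1)
w = -1.0          # start in the basin of the non-global minimum
dt = 1e-3
for step in range(1, 200_001):
    T = 2.0 / np.log(2.0 + step * dt)   # slow, logarithmic cooling schedule
    # Euler-Maruyama step of dw = -f'(w) dt + sqrt(2 T) dB_t
    w += -grad_f(w) * dt + np.sqrt(2.0 * T * dt) * rng.standard_normal()

print("final iterate :", w)      # typically ends near the global minimum (w ~ 2.1)
print("final value   :", f(w))
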
“…We adopt the suggestive name of optimotaxis to designate this search procedure. An important feature of optimotaxis is that it can be used with a broad class of signal profiles, including the ones with multiple maxima, a feature that is shared with a few other stochastic optimization algorithms which are not constrained by vehicle kinematics [14,15].…”
Section: Introduction
confidence: 99%