2008
DOI: 10.1016/j.spl.2008.02.034

Almost sure convergence of randomly truncated stochastic algorithms under verifiable conditions

Abstract: In this paper, we are interested in the almost sure convergence of randomly truncated stochastic algorithms. In their pioneering work, Chen and Zhu (1986) required the family of noise terms to be summable to ensure convergence. We present a new convergence theorem which extends the known results by removing this condition on the noise terms, a condition which is quite hard to check in practice. The aim of this work is to prove an almost sure convergence result of ran…

Cited by 18 publications (24 citation statements); references 8 publications.
“…So, (17) and (18) tell us A(θ) ≤ c_2 c_3 n^p, and we deduce the existence of a deterministic constant c_4 depending only on p, η, T, b, σ and ψ such that…”
Section: Unconstrained Stochastic Algorithm
confidence: 52%
“…The first one, based on a truncation procedure called "Projection à la Chen", was introduced by Chen in [6,7] and later investigated by several authors (see, e.g., Andrieu, Moulines and Priouret in [1] and Lelong in [17]). The use of this procedure in the context of importance sampling was initially proposed by Arouna in [2] and investigated afterward by Lapeyre and Lelong in [16].…”
Section: Introduction
confidence: 99%
“…The new idea of using one importance sampling parameter per level was later taken up in [6], but coupled with stochastic approximation to build adaptive estimators. In fact, minimizing λ ↦ σ²(λ) can be achieved with the randomly truncated Robbins-Monro algorithm proposed by Chen et al. [7,8] and later investigated in the context of importance sampling by Lapeyre and Lelong [25] and Lelong [26]. The numerical stability of these stochastic algorithms depends strongly on the choice of the descent step, often referred to as the gain sequence, which proves highly sensitive in practice.…”
Section: 2)
confidence: 99%
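The randomly truncated Robbins-Monro recursion referred to above ("Projection à la Chen") can be illustrated with a minimal sketch. This is not the authors' implementation: the noisy gradient, the growing sequence of compact sets (here, balls of increasing radius), and the gain sequence γ_n = γ₀/n are all illustrative choices. Whenever an iterate escapes the current compact set, it is reset to the starting point and the next, larger compact set is used.

```python
import numpy as np

def truncated_robbins_monro(grad, x0, compact_radii, gamma0=1.0, n_iter=20_000, seed=0):
    """Sketch of a randomly truncated Robbins-Monro scheme (Chen's projection).

    grad(x, rng) returns a noisy estimate of the gradient at x.
    compact_radii gives the radii of the growing compact sets K_0 ⊂ K_1 ⊂ ...
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    level = 0  # index of the current compact set
    for n in range(1, n_iter + 1):
        gamma_n = gamma0 / n                    # gain sequence, sensitive in practice
        candidate = x - gamma_n * grad(x, rng)  # noisy gradient descent step
        if np.linalg.norm(candidate) > compact_radii[level]:
            # truncation: restart at x0 and enlarge the compact set
            x = np.asarray(x0, dtype=float)
            level = min(level + 1, len(compact_radii) - 1)
        else:
            x = candidate
    return x

# Toy usage: minimize E[(theta - X)^2] for X ~ N(2, 1); the unconstrained
# minimizer is theta* = 2, and the noisy gradient is 2 * (theta - X).
noisy_grad = lambda x, rng: 2.0 * (x - (2.0 + rng.standard_normal()))
theta_hat = truncated_robbins_monro(noisy_grad, 0.0, [1.0, 2.0, 4.0, 8.0, 16.0])
```

Under the summability-free conditions discussed in the paper, such a scheme converges almost surely while only ever requiring the iterates to be controlled on compact sets, which is what makes the truncation device attractive in practice.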
“…Note that the idea of truncations goes back to Khasminskii and Nevelson (1972) and Fabian (1978) (see also Chen and Zhu (1986), Chen et al. (1987), Andradóttir (1995), Sharia (1997), Tadic (1997, 1998), Lelong (2008)). A comprehensive bibliography and some comparisons can be found in Sharia (2014).…”
Section: Introduction
confidence: 99%