1990
DOI: 10.1017/s0269964800001674

Improving Stochastic Relaxation for Gaussian Random Fields

Abstract: In this paper, we are concerned with the simulation of Gaussian random fields by means of iterative stochastic algorithms, which are compared in terms of rate of convergence. A parametrized class of algorithms, which includes stochastic relaxation (Gibbs sampler), is proposed and its convergence properties are established. A suitable choice for the parameter improves the rate of convergence with respect to stochastic relaxation for special classes of covariance matrices. Some examples and numerical experiments…

Cited by 41 publications (42 citation statements)
References 10 publications
“…An early reference for variance reduction for Markov chain samplers is Green and Han (1992), who exploited an idea of Barone and Frigessi (1989) and constructed antithetic variables that may achieve variance reduction in simple settings but do not appear to be widely applicable. Andradòttir et al.…”
Section: Introductionmentioning
confidence: 99%
“…Let $\boldsymbol{\vartheta} = (\vartheta_1, \ldots, \vartheta_n)^{\mathrm{T}}$ be a parameter vector with normal full conditional distributions $\vartheta_i \mid \boldsymbol{\vartheta}_{-i} \sim N(\mu_i, \sigma_i^2)$, where the conditional mean $\mu_i$ and the conditional variance $\sigma_i^2$ may depend on $\boldsymbol{\vartheta}_{-i} = \{\vartheta_j : j = 1, \ldots, n,\ j \neq i\}$. Adler () and Barone and Frigessi () introduced an overrelaxation method where the update on $\boldsymbol{\vartheta}$ is performed by using Gibbs sampling, and where the new value $\vartheta_i'$ for each margin of $\boldsymbol{\vartheta}$ is generated as $\vartheta_i' = (1+\kappa)\mu_i - \kappa\vartheta_i + u\,\sigma_i(1-\kappa^2)^{1/2}$, $i = 1, \ldots, n$, with $u \sim N(0,1)$ being a standard normal random variable. This update enables the introduction of dependence between successive samples via the constant antithetic parameter $\kappa$, which is required to lie in the open interval $(-1, 1)$ so that the Markov chain is ergodic and produces $\pi(\boldsymbol{\vartheta})$ as its stationary distribution.…”
Section: A Deterministic Proposal Distributionmentioning
confidence: 99%
“…Variance reduction in estimating $E[f(\boldsymbol{\vartheta})]$ is achieved through the antithetic variable method (Hammersley and Morton, ) by setting $\kappa > 0$, so that the estimation bias in the previous sample is corrected in the opposite direction. The rate of convergence of the overrelaxation method in expression (11) was studied in Barone and Frigessi (), whereas Green and Han () established that the asymptotic variance of the estimator for $E[f(\boldsymbol{\vartheta})]$ under this strategy, for linear $f$, is proportional to $(1-\kappa)/(1+\kappa)$.…”
Section: A Deterministic Proposal Distributionmentioning
confidence: 99%
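The overrelaxed update quoted above can be illustrated with a minimal sketch. The example below applies the update $\vartheta_i' = (1+\kappa)\mu_i - \kappa\vartheta_i + u\,\sigma_i(1-\kappa^2)^{1/2}$ to a hypothetical test case not taken from the paper: a bivariate standard Gaussian with correlation `rho`, for which each full conditional is $N(\rho\vartheta_j,\ 1-\rho^2)$. The function name and parameter defaults are illustrative assumptions.

```python
import numpy as np

def overrelaxed_gibbs(n_iter, rho=0.9, kappa=0.8, seed=0):
    """Overrelaxed Gibbs sampling on a bivariate N(0, [[1, rho], [rho, 1]]).

    Each coordinate update replaces the plain Gibbs draw with
        theta_i' = (1 + kappa) * mu_i - kappa * theta_i
                   + u * sigma_i * sqrt(1 - kappa**2),  u ~ N(0, 1),
    which preserves the full conditional N(mu_i, sigma_i**2) and, for
    kappa in (-1, 1), leaves the target as the stationary distribution.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    sigma = np.sqrt(1.0 - rho ** 2)  # conditional std, same for both coordinates
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        for i in range(2):
            mu = rho * theta[1 - i]  # conditional mean given the other coordinate
            u = rng.standard_normal()
            theta[i] = ((1 + kappa) * mu - kappa * theta[i]
                        + u * sigma * np.sqrt(1 - kappa ** 2))
        samples[t] = theta
    return samples

samples = overrelaxed_gibbs(20000)
print(samples.mean(axis=0))  # sample mean, close to (0, 0) at stationarity
```

Setting `kappa > 0` makes successive draws negatively correlated around the conditional mean, which is the antithetic mechanism behind the $(1-\kappa)/(1+\kappa)$ variance factor for linear $f$; `kappa = 0` recovers the ordinary Gibbs sampler.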
“…Their behaviors are far from well understood [2,5,6,7,12,15]. In this paper we shall consider the Gibbs sampler in the Euclidean space.…”
Section: Introductionmentioning
confidence: 99%