2004
DOI: 10.1046/j.1369-7412.2003.05512.x
Approximating Likelihoods for Large Spatial Data Sets

Abstract: Likelihood methods are often difficult to use with large, irregularly sited spatial data sets, owing to the computational burden. Even for Gaussian models, exact calculation of the likelihood for n observations requires O(n^3) operations. Since any joint density can be written as a product of conditional densities based on some ordering of the observations, one way to lessen the computations is to condition on only some of the 'past' observations when computing the conditional densities. We show ho…
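The approximation described in the abstract can be sketched in code. The exact Gaussian log-likelihood factors as a product of conditionals; conditioning each observation on only its m nearest 'past' neighbours avoids the O(n^3) cost of the full factorization. This is a minimal illustrative sketch, not the authors' implementation: the exponential covariance, the coordinate ordering, and all names (`exp_cov`, `vecchia_loglik`, `m`) are assumptions made for the example.

```python
# Illustrative sketch of a Vecchia-style approximate log-likelihood.
# Assumptions (not from the paper): exponential covariance with unit sill,
# observations taken in their given order, conditioning on the m nearest
# previously-ordered points.
import numpy as np

def exp_cov(locs_a, locs_b, sill=1.0, rng=0.5):
    """Exponential covariance between two sets of spatial locations."""
    d = np.linalg.norm(locs_a[:, None, :] - locs_b[None, :, :], axis=-1)
    return sill * np.exp(-d / rng)

def vecchia_loglik(y, locs, m=5, sill=1.0, rng=0.5):
    """Sum of Gaussian conditional log-densities, each conditioned on
    at most the m nearest 'past' observations."""
    n = len(y)
    ll = 0.0
    for i in range(n):
        if i == 0:
            mean, var = 0.0, sill
        else:
            past = np.arange(i)
            d = np.linalg.norm(locs[past] - locs[i], axis=-1)
            nb = past[np.argsort(d)[:m]]          # conditioning set
            C_nb = exp_cov(locs[nb], locs[nb], sill, rng)
            c = exp_cov(locs[i:i + 1], locs[nb], sill, rng)[0]
            w = np.linalg.solve(C_nb, c)          # kriging weights
            mean = w @ y[nb]
            var = sill - c @ w                    # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return ll
```

With m = n the product of conditionals is the exact joint density, so the approximation can be checked against the full Gaussian log-likelihood; smaller m trades accuracy for speed.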

Cited by 368 publications (391 citation statements)
References 27 publications
“…When all sampling locations are on a (near-)regular lattice, spectral methods to approximate the likelihood can be used and allow to reduce the computational cost to an order of O(n log(n)) [31,13,28]. These techniques cannot be applied to scattered data, but other approaches to approximating likelihoods [79,78,5,46], covariance tapering [29], or simplified Gaussian models of low rank [3,12,20] have been proposed and have been shown to be quite effective in reducing the computational effort to an order that allows the application of REML in most practical situations.…”
Section: Kernel Selection and Parameter Estimation
confidence: 99%
“…The M data points chosen from X n−1 are typically those closest to x n . If the optimal scale is expected to be large, however, the approximation can be improved considerably if some points further apart are also included (see [17] for a detailed discussion).…”
Section: Computational Issues
confidence: 99%
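The conditioning-set choice quoted above — take most of the M points nearest to x_n, but reserve a few slots for more distant points when the dependence range is large — can be sketched as follows. This is a hypothetical illustration; `conditioning_set`, `M`, and `n_far` are invented names, and the even spacing of the far points is an assumption, not the scheme discussed in the paper.

```python
# Illustrative conditioning-set selection: mostly nearest neighbours,
# plus a few deliberately distant points (assumed evenly spread here).
import numpy as np

def conditioning_set(locs_past, x_new, M=10, n_far=2):
    """Return indices into locs_past: the M - n_far nearest points to
    x_new, plus n_far points spread over the remaining, more distant ones."""
    d = np.linalg.norm(locs_past - x_new, axis=-1)
    order = np.argsort(d)
    if len(order) <= M:
        return order                      # fewer past points than M: use all
    near = order[:M - n_far]              # closest points
    rest = order[M - n_far:]
    far = rest[np.linspace(0, len(rest) - 1, n_far).astype(int)]
    return np.concatenate([near, far])
```

Swapping this selector into a Vecchia-style likelihood changes only which past observations each conditional density is built from; the per-observation cost stays O(M^3).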
“…CL methods are an attractive option when the full likelihood is difficult to write and/or when the data sets are large. This approach has been used in several spatial and space-time contexts, mainly in the Gaussian case (Vecchia, 1988; Curriero and Lele, 1999; Stein et al., 2004; Bevilacqua et al., 2012; Bevilacqua and Gaetan, 2015; Bevilacqua et al., 2016). Outside the Gaussian scenario, Heagerty and Lele (1998) propose CL inference for binary spatial data.…”
Section: Introduction
confidence: 99%