2019
DOI: 10.1016/j.jmva.2019.104534

Composite likelihood estimation for a Gaussian process under fixed domain asymptotics

Abstract: We study composite likelihood estimation of the covariance parameters with data from a one-dimensional Gaussian process with exponential covariance function under fixed domain asymptotics. We show that the weighted pairwise maximum likelihood estimator of the microergodic parameter can be consistent or inconsistent, depending on the range of admissible parameter values in the likelihood optimization. On the contrary, the weighted pairwise conditional maximum likelihood estimator is always consistent. Both esti…
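The paper's estimators are not reproduced in this record. As a rough illustration only, the sketch below (assuming a zero-mean process observed at one-dimensional locations, the exponential covariance c(h) = sigma2 * exp(-theta * h), and a simple distance-based binary weighting) shows the general form of a weighted pairwise marginal composite log-likelihood of the kind the abstract refers to; the function and parameter names are illustrative, not taken from the paper.

import numpy as np

def exp_pairwise_loglik(sigma2, theta, x, y, cutoff=np.inf):
    # Weighted pairwise marginal composite log-likelihood for a zero-mean
    # Gaussian process with values y at 1-D locations x, under the
    # exponential covariance c(h) = sigma2 * exp(-theta * h).
    # Pairs farther apart than `cutoff` get binary weight 0, all others 1.
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            h = abs(x[i] - x[j])
            if h > cutoff:                      # weight w_ij = 0: skip the pair
                continue
            rho = np.exp(-theta * h)            # correlation of the pair
            det = sigma2**2 * (1.0 - rho**2)    # determinant of the 2x2 covariance
            quad = (y[i]**2 - 2.0*rho*y[i]*y[j] + y[j]**2) / (sigma2 * (1.0 - rho**2))
            total += -0.5 * (2.0*np.log(2.0*np.pi) + np.log(det) + quad)
    return total

For this covariance model the microergodic parameter is sigma2 * theta, which is the quantity whose consistent estimation the abstract discusses.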

Cited by 11 publications (5 citation statements). References 53 publications. Citation statements:
“…Further work that is of interest following these findings includes deriving the sandwich covariance matrix for weighted versions of the composite likelihood functions. Our developments can be extended to allow for general weights, but the sandwich covariance matrix and subsequent efficiency expressions will likely be less interpretable except under specific weight configurations such as binary weights (Bevilacqua and Gaetan, 2015) or optimal weights (Bachoc, Bevilacqua and Velandia, 2019; Pace, Salvan and Sartori, 2019). One particular set of weights that would be of interest is the composite full conditional likelihood with alternating binary weights, in line with the work of Besag (1974).…”
Section: Discussion (mentioning)
confidence: 99%
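For context on the sandwich form mentioned in this excerpt, the Godambe covariance H^{-1} J H^{-1} of a composite-likelihood estimator can be sketched numerically as below; estimating H and J from independent replicates (or blocks) is an illustrative assumption here, not the derivation the citing authors have in mind.

import numpy as np

def sandwich_covariance(scores, neg_hessians):
    # Godambe ("sandwich") covariance H^{-1} J H^{-1} of a composite-likelihood
    # estimator, estimated from B independent replicates or blocks:
    #   H = sensitivity matrix (average negative Hessian of the composite log-lik)
    #   J = variability matrix (covariance of the composite score)
    # scores:       array of shape (B, p) with composite score vectors
    # neg_hessians: array of shape (B, p, p) with negative Hessian matrices
    H = neg_hessians.mean(axis=0)
    J = np.atleast_2d(np.cov(scores, rowvar=False))
    H_inv = np.linalg.inv(H)
    return H_inv @ J @ H_inv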
“…Perhaps the most successful approximation is Vecchia's method [163], which has attracted a remarkable amount of attention in recent times [inc. 147, 48, 49, 67, 47]. The Vecchia approximation can be used with any correlation model and its basic idea is to replace (13) with a product of Gaussian conditional distributions, in which each conditional distribution involves only a small subset of the data. This approximation requires that the data are ordered and the number m of 'previous' data on which to condition is to be specified.…”
Section: Approximate Likelihood and the Matérn Model (mentioning)
confidence: 99%
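The excerpt above describes the mechanics of Vecchia's approximation; the following minimal sketch (an illustration, not the cited papers' implementations) conditions each ordered observation on its m immediately preceding neighbours, with x and y as NumPy arrays and cov a user-supplied covariance-matrix function.

import numpy as np

def vecchia_loglik(y, x, cov, m):
    # Vecchia-type approximate log-likelihood for a zero-mean Gaussian process.
    # The joint Gaussian density is replaced by a product of univariate
    # conditional densities; observation i is conditioned on at most the m
    # immediately preceding (ordered) observations.
    n = len(y)
    ll = 0.0
    for i in range(n):
        prev = np.arange(max(0, i - m), i)            # conditioning indices
        if prev.size == 0:
            mean, var = 0.0, cov(x[[i]], x[[i]])[0, 0]
        else:
            C_pp = cov(x[prev], x[prev])              # covariance among conditioning points
            c_ip = cov(x[[i]], x[prev])               # covariance with point i, shape (1, k)
            w = np.linalg.solve(C_pp, c_ip.T)         # (k, 1) kriging weights
            mean = w[:, 0] @ y[prev]                  # conditional mean of y[i]
            var = cov(x[[i]], x[[i]])[0, 0] - c_ip[0] @ w[:, 0]   # conditional variance
        ll += -0.5 * (np.log(2.0 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return ll

For the exponential model discussed in the abstract one could pass, for example, cov = lambda a, b: sigma2 * np.exp(-theta * np.abs(a[:, None] - b[None, :])).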
“…See the recent review of Katzfuss and Guinness [83] for further detail. The Vecchia likelihood can be viewed as a specific instance of a more general class of estimation methods called quasi- or composite likelihood [103, 162] that have been widely used for the estimation of Gaussian fields with the Matérn model [52, 26, 13].…”
Section: Approximate Likelihood and the Matérn Model (mentioning)
confidence: 99%
“…Our work has important relations to several research topics. First, our theory can be viewed as the Bayesian counterpart of the frequentist fixed-domain asymptotic theory on the maximum likelihood estimator in Ying [80], Ying [81], Zhang [82], Chen et al [14], Loh [46], Du et al [21], Wang and Loh [75], Kaufman and Shaby [38], Chang et al [12], Velandia et al [73], Bachoc et al [3], Bachoc and Lagnoux [4], etc. Second, our posterior asymptotic efficiency result is a counterpart of Stein's work in the Bayesian setup and guarantees the optimal estimation of prediction MSE.…”
Section: Our Contributions (mentioning)
confidence: 99%