2021
DOI: 10.1080/01621459.2020.1858838
Hierarchical Transformed Scale Mixtures for Flexible Modeling of Spatial Extremes on Datasets With Many Locations

Abstract: Flexible spatial models that allow transitions between tail dependence classes have recently appeared in the literature. However, inference for these models is computationally prohibitive, even in moderate dimensions, due to the necessity of repeatedly evaluating the multivariate Gaussian distribution function. In this work, we attempt to achieve truly high-dimensional inference for extremes of spatial processes, while retaining the desirable flexibility in the tail dependence structure, by modifying an establ…


Cited by 8 publications (5 citation statements) | References 29 publications
“…It takes into account the spatiotemporal structure, using the distance between dependent clusters, by means of a dissimilarity measure designed to handle missing data. Note that other papers tackle the problem of missing data, using a hierarchical max-stable process construction such as in Reich and Shaby (2012), or conditioning on a latent process in the context of extremes, such as in Zhang et al. (2021).…”
Section: Discussion
confidence: 99%
“…Computational speed-up can be obtained for censored likelihood approaches by exploiting quasi-Monte Carlo methods in the calculation of multivariate Gaussian or Student's t distributions (de Fondeville and Davison, 2018; Beranger et al., 2019); by adding a measurement error term to the model, as in Zhang et al. (2019); or even by using proper scoring rules instead of maximum likelihood, as in de Fondeville and Davison (2018). However, to tackle problems in truly higher dimensions, sparse models with a fundamentally different probabilistic structure (Engelke and Hitz, 2020; Engelke and Ivanovs, 2021) need to be devised.…”
Section: Discussion
confidence: 99%
“…(2017) showed how to exploit (24) to perform censored likelihood inference based on high threshold exceedances for this class of models, but this remains fairly intensive in moderate dimensions (roughly D > 30) in cases where the random variable R cannot be integrated out in explicit form, so that (one-dimensional) numerical integrals are required. Nevertheless, Zhang et al. (2019) recently showed how to bypass the explicit integral in (24) and fit these models more efficiently at many locations by adopting the Bayesian perspective and adding a measurement error term (i.e., a "nugget effect") to the model. Another appealing property of Gaussian scale mixture models is that they are easily amenable to unconditional or conditional simulation, which is typically required for the evaluation of spatial risk measures and for spatial prediction.…”
Section: Random Scale Mixtures and Related Models
confidence: 99%
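The simulation property mentioned in the statement above can be illustrated with a minimal sketch. A Gaussian scale mixture takes the form X(s) = R · W(s), with W a Gaussian process and R an independent positive random scale; drawing one replicate only requires drawing R and W separately. The grid, correlation model, range parameter, and Pareto scale distribution below are all illustrative assumptions, not the specification used in the paper.

```python
import numpy as np

# Hypothetical sketch: unconditional simulation of a Gaussian scale mixture
# X(s) = R * W(s). All model choices here (exponential correlation, range
# 0.3, Pareto-type scale) are assumptions for illustration only.
rng = np.random.default_rng(0)

# Locations on a 1-D grid (illustrative choice)
locs = np.linspace(0.0, 1.0, 50)

# Exponential correlation matrix for the Gaussian part W
dists = np.abs(locs[:, None] - locs[None, :])
corr = np.exp(-dists / 0.3)

# Draw W via a Cholesky factor (small jitter for numerical stability),
# then rescale the whole field by a single random scale R.
L = np.linalg.cholesky(corr + 1e-10 * np.eye(len(locs)))
W = L @ rng.standard_normal(len(locs))
R = rng.pareto(3.0) + 1.0  # heavier-tailed R => stronger tail dependence
X = R * W
```

Because R multiplies the entire field at once, a large draw of R lifts all locations simultaneously, which is what produces dependence in the extremes.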
“…Here r ∈ [0, 1] is the ratio of spatial to total variation. The multivariate nugget term handles the censoring in arsenic log-concentration, thereby circumventing the computational burden arising from censored likelihoods (Hazra et al., 2018; Yadav et al., 2019; Zhang et al., 2021).…”
Section: Methods
confidence: 99%
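The variance decomposition behind r in the statement above can be sketched directly: the observed field is a convex combination, in variance, of a correlated spatial component and an independent nugget, so r is the fraction of the total (unit) variance attributable to the spatial part. The grid, correlation model, and the value r = 0.8 below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a field with a nugget effect:
#   field = sqrt(r) * spatial + sqrt(1 - r) * nugget,
# so each margin has variance r * 1 + (1 - r) * 1 = 1, and r is the
# ratio of spatial to total variation. All numeric choices are assumed.
rng = np.random.default_rng(1)
n = 40
locs = np.linspace(0.0, 1.0, n)
corr = np.exp(-np.abs(locs[:, None] - locs[None, :]) / 0.2)  # assumed range

r = 0.8  # spatial-to-total variance ratio
L = np.linalg.cholesky(corr + 1e-10 * np.eye(n))
spatial = L @ rng.standard_normal(n)   # correlated component
nugget = rng.standard_normal(n)        # independent measurement error
field = np.sqrt(r) * spatial + np.sqrt(1.0 - r) * nugget
```

Conditioning on the smooth component makes the observations independent given the latent field, which is what lets the cited approaches avoid evaluating high-dimensional censored Gaussian likelihoods.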