Fast computation of the deviance information criterion for latent variable models
2016 | DOI: 10.1016/j.csda.2014.07.018

Abstract: The deviance information criterion (DIC) has been widely used for Bayesian model comparison. However, recent studies have cautioned against the use of certain variants of the DIC for comparing latent variable models. For example, it has been argued that the conditional DIC, based on the conditional likelihood obtained by conditioning on the latent variables, is sensitive to transformations of the latent variables and their distributions. Further, in a Monte Carlo study that compares various Poisson models, the conditional…
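As a hedged sketch of the quantity the abstract discusses (not the paper's own algorithm), the observed-data DIC can be estimated from MCMC output once the integrated likelihood is available; the names `dic`, `loglike`, and the toy normal model below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def dic(loglike, draws, y):
    """Observed-data DIC from posterior draws.

    loglike(theta, y) must return the log *integrated* likelihood
    log p(y | theta), i.e. with latent variables marginalized out.
    Returns (DIC, p_D) where p_D is the effective number of parameters.
    """
    dev = np.array([-2.0 * loglike(th, y) for th in draws])  # deviance at each draw
    d_bar = dev.mean()                          # posterior mean deviance
    theta_bar = np.mean(draws, axis=0)          # plug-in posterior mean
    p_d = d_bar + 2.0 * loglike(theta_bar, y)   # p_D = Dbar - D(theta_bar)
    return d_bar + p_d, p_d                     # DIC = Dbar + p_D

# Toy check: y_i ~ N(mu, 1) with exact flat-prior posterior draws for mu.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=100)

def loglike(mu, y):
    # With no latent variables, the integrated likelihood is the plain likelihood.
    return -0.5 * y.size * np.log(2 * np.pi) - 0.5 * np.sum((y - mu) ** 2)

draws = rng.normal(y.mean(), 1 / np.sqrt(y.size), size=2000)
dic_val, p_d = dic(loglike, draws, y)
```

In this one-parameter toy model p_D should come out close to 1, which is a quick sanity check that the estimator is wired up correctly.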


Cited by 58 publications (46 citation statements) | References 39 publications
“…However, this approximation holds only for large samples and requires a non‐parametric density approximation to the posterior p ( μ i | y i , Ψ ) when the latent-variable vector is not Gaussian. Methods for fast computation of mDIC are advocated in Chan and Grant for several cases where the marginalized likelihood is available analytically. It is shown empirically that the variability of cDIC is substantially larger than that of mDIC, as expected, since the former depends on the unknown latent variables whereas the marginalized criterion integrates them out.…”
Section: Introduction
confidence: 99%
“…Furthermore, the UC models always attain a higher likelihood than enforcing an H-P filter or a deterministic trend on the data. Second, allowing large, non-informative priors for the trend and cycle component shocks results in posterior estimates of the trend shocks that are relatively large compared with the cycle shocks (for similar results with US GDP, see Grant and Chan (2017a)).…”
Section: Results From Prior Sensitivity Analysis
confidence: 98%
“…In case 6, the shock originates from the cycle equation and has a negative correlation with the trend shock. In equation (11), if at time t there is a shock to the cycle innovation of size x_t, it causes the cycle to change by ax_t; it also causes the trend innovation to change by z_t, and consequently the trend changes by bz_t (which has the opposite sign to ax_t, given the assumed negative correlation). Consequently, as in case 5, y_t = ax_t + bz_t, which is smaller in absolute value than ax_t.…”
Section: Setup For Correlations Between Trend and Cycle Shocks
confidence: 99%
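A one-line numeric check of the offsetting decomposition described in that statement; the loadings and shock sizes are hypothetical, not taken from the cited paper:

```python
# Hypothetical loadings and shocks illustrating y_t = a*x_t + b*z_t,
# where the trend response b*z_t partially offsets the cycle response a*x_t.
a, b = 0.8, 0.5          # illustrative loadings, not from the paper
x_t, z_t = 1.0, -0.6     # cycle shock and negatively correlated trend shock
y_t = a * x_t + b * z_t  # the two responses have opposite signs
assert abs(y_t) < abs(a * x_t)  # smaller in absolute value than a*x_t
```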
“…This information criterion is relatively easy to compute when model estimation is based on Bayesian Markov chain Monte Carlo (MCMC) techniques such as Gibbs sampling. Nevertheless, the literature shows that DIC estimates become more accurate as the number of parameters in the conditioning set of the log-likelihood shrinks (Chan and Grant 2016). Here, we compute the conditional log-likelihood from the Kalman-filter step of the Gibbs sampler.…”
confidence: 99%
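The Kalman-filter evaluation of a log-likelihood mentioned in that last statement can be sketched for the simplest state-space case, a local-level model; `local_level_loglik` and the variance names are illustrative assumptions, not the cited authors' code:

```python
import numpy as np

def local_level_loglik(y, sig2_eps, sig2_eta, a0=0.0, p0=1e7):
    """Gaussian log-likelihood of the local-level model
        y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t
    via the Kalman filter's prediction-error decomposition."""
    a, p, ll = a0, p0, 0.0           # near-diffuse initial state
    for yt in y:
        f = p + sig2_eps             # prediction-error variance
        v = yt - a                   # one-step-ahead prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                    # Kalman gain
        a = a + k * v                # updated state mean
        p = p * (1 - k) + sig2_eta   # predicted state variance for t+1
    return ll

# Simulated random-walk-plus-noise data as a usage example.
rng = np.random.default_rng(1)
mu = np.cumsum(rng.normal(0, 0.2, 200))   # random-walk trend
y = mu + rng.normal(0, 1.0, 200)          # noisy observations
ll = local_level_loglik(y, sig2_eps=1.0, sig2_eta=0.04)
```

Inside a Gibbs sampler, this log-likelihood would be evaluated at each draw of the variance parameters, which is the conditional-likelihood computation the quoted passage refers to.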