2010
DOI: 10.1016/j.stamet.2009.04.002

The mode oriented stochastic search (MOSS) algorithm for log-linear models with conjugate priors

Abstract: We describe a novel stochastic search algorithm for rapidly identifying regions of high posterior probability in the space of decomposable, graphical and hierarchical log-linear models. Our approach is based on the conjugate priors for log-linear parameters introduced in Massam et al. (2008). We discuss the computation of Bayes factors through Laplace approximations and the Bayesian iterative proportional fitting algorithm for sampling model parameters. We also present a clustering algorithm for discret…
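
As a rough illustration of the search described in the abstract, the sketch below implements a simplified mode-oriented stochastic search loop in Python. The `log_score` and `neighbors` callables and the single pruning constant `c` are illustrative assumptions, not the paper's exact interface (the published MOSS algorithm uses separate inclusion and pruning thresholds and a pruning probability, collapsed here into one cutoff); in practice `log_score` would return a Laplace-approximated log marginal likelihood under the conjugate prior.

```python
import math
import random

def moss_search(initial_model, neighbors, log_score, c=0.5, n_iter=100):
    """Simplified mode-oriented stochastic search (illustrative sketch).

    Models must be hashable (e.g. frozensets of interaction terms).
    `neighbors(m)` yields models differing from `m` by one term;
    `log_score(m)` returns an approximate log marginal likelihood.
    """
    scores = {initial_model: log_score(initial_model)}
    explored = set()
    for _ in range(n_iter):
        # Pick an as-yet-unexplored model from the current list.
        frontier = [m for m in scores if m not in explored]
        if not frontier:
            break  # every model on the list has been expanded
        m = random.choice(frontier)
        explored.add(m)
        # Score every one-term neighbour of the chosen model.
        for nb in neighbors(m):
            if nb not in scores:
                scores[nb] = log_score(nb)
        # Prune: keep only models within a factor c of the best score.
        cutoff = max(scores.values()) + math.log(c)
        scores = {k: s for k, s in scores.items() if s >= cutoff}
        explored &= set(scores)
    return scores  # the retained high posterior probability region
```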

Cited by 26 publications
(29 citation statements)
references
References 31 publications
0
29
0
Order By: Relevance
“…Although graphical models are popular due to their flexibility and interpretability, computation is daunting since the size of the model space grows exponentially with p . Even with highly efficient search algorithms (Jones et al 2005; Carvalho and Scott 2009; Dobra and Massam 2010; Lenkoski and Dobra 2011, among others), it is only feasible to visit a tiny subset of the model space even for moderate p . Accurate model selection in this context is difficult when p is moderate to large and the number of samples is not enormous because, in such cases, even the highest posterior probability models receive very small weight and there will typically be a large number of models having essentially identical performance according to any given model selection criteria (Akaike information criterion [AIC], Bayesian information criterion [BIC], etc).…”
Section: Introduction (mentioning; confidence: 99%)
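
The exponential growth this statement mentions is easy to make concrete for graphical models: models indexed by undirected graphs on p vertices number 2^(p(p−1)/2), as the toy computation below shows.

```python
# Graphical models on p variables correspond to undirected graphs on
# p vertices, of which there are 2 ** (p*(p-1)//2).
for p in (5, 10, 20):
    print(p, 2 ** (p * (p - 1) // 2))
# p=5  -> 1024
# p=10 -> 35184372088832   (~3.5e13)
# p=20 -> ~1.57e57
```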
“…Posterior model search in log-linear models using traditional Markov chain Monte Carlo (MCMC) methods tends to bog down quickly as dimensionality increases. Dobra and Massam (2010) proposed a mode-oriented stochastic search method to more efficiently explore high posterior probability regions in decomposable, graphical, and hierarchical log-linear models.…”
Section: Introduction (mentioning; confidence: 99%)
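
The Bayesian iterative proportional fitting step the paper uses for sampling model parameters builds on classical IPF, which repeatedly rescales a fitted table so its margins match the observed margins in the model's generating class. Below is a minimal numpy sketch of the classical step; the function name and interface are mine, and the Bayesian variant would replace the fixed observed margins with posterior draws.

```python
import numpy as np

def ipf(table, margins, n_sweeps=50):
    """Classical iterative proportional fitting (illustrative sketch).

    `table` is an observed contingency table; `margins` lists the
    generating class as tuples of axis indices, e.g. [(0, 1), (1, 2)]
    for the model [XY][YZ] on a three-way table.
    """
    fit = np.full(table.shape, table.sum() / table.size)  # flat start
    for _ in range(n_sweeps):
        for axes in margins:
            other = tuple(i for i in range(table.ndim) if i not in axes)
            obs = table.sum(axis=other, keepdims=True).astype(float)
            cur = fit.sum(axis=other, keepdims=True)
            # Rescale the fitted table so this margin matches the data.
            fit = fit * np.divide(obs, cur, out=np.ones_like(cur),
                                  where=cur > 0)
    return fit

# Example: fit the model [XY][YZ] to a random 2 x 3 x 4 table.
rng = np.random.default_rng(0)
t = rng.poisson(5.0, size=(2, 3, 4))
fitted = ipf(t, [(0, 1), (1, 2)])
```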
“…Selection of hierarchical loglinear models has been widely discussed in the statistical literature [21,19,1,57]. More recent approaches that work well for high-dimensional sparse contingency tables involve Bayesian Markov chain Monte Carlo (MCMC) algorithms [39,40,41,13,51,14,16,15,17].…”
Section: Approach (mentioning; confidence: 99%)
“…For example, a full model for the (2 × 3 × 2 × 8 × 12)-way contingency table data that we consider later in this section requires a 1151-dimensional parameter. One Bayesian approach to the analysis of such data is via model selection between reduced log-linear models (Dawid and Lauritzen, 1993; Dobra and Massam, 2010). However, model selection can be difficult even for moderate numbers of variables and categories, owing to the large number of models with low posterior probability and the resulting difficulty in completely exploring the model space.…”
Section: Marginally Specified Priors for Contingency Table Data (mentioning; confidence: 99%)
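
The 1151-dimensional parameter quoted above is just the saturated-model count: a 2 × 3 × 2 × 8 × 12 table has 1152 cells, and fixing the total leaves 1152 − 1 free parameters, as the check below confirms.

```python
import math

dims = (2, 3, 2, 8, 12)
cells = math.prod(dims)   # 1152 cells in the full table
print(cells - 1)          # 1151 free parameters in the saturated model
```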