Bayesian Theory and Applications 2013
DOI: 10.1093/acprof:oso/9780199695607.003.0020
Bayesian Model Specification: Heuristics And Examples

Cited by 20 publications (13 citation statements)
References 10 publications
“…Leamer [1978] is a notable early effort advocating ad-hoc model selection for the purpose of human comprehensibility. Fouskakis and Draper [2008], Fouskakis et al [2009] and Draper [2013] represent efforts to define variable importance in real-world terms using subject matter considerations. A more generic approach is to gauge predictive relevance [Gelfand et al, 1992].…”
mentioning
confidence: 99%
“…Bernardo and Smith ()), posterior model probabilities correspond to a utility function (often somewhat strange in practice) in which the decision maker receives 1 utile for choosing the correct model in a finite set {M_i, i ∈ I} of candidates and 0 utiles otherwise. Other, arguably more natural, utility functions lead to other model comparison strategies; a leading example is log-scoring, arising from a utility function that rewards predictive accuracy (see, for example, Gelfand and Ghosh () and Draper () for details). As the authors emphasize, the asymptotics in this paper are based on a scenario in which the set {M_i, i ∈ I} of candidate models is held constant and the sample size n is permitted to increase without bound. But surely this is unrealistic for actual practice: as n grows, a good statistician will typically enlarge {M_i, i ∈ I} by enriching it with more complicated models, to do a better job of emulating the complexity of the real world.…”
Section: Discussion On The Paper By Drton And Plummer
mentioning
confidence: 99%
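The 0/1 utility described in the snippet above (1 utile for picking the correct model, 0 otherwise) makes the posterior model probability the quantity to maximize. A minimal sketch of that computation, using hypothetical log marginal likelihoods for three candidate models (the numbers are illustrative, not from the paper):

```python
import numpy as np

def posterior_model_probs(log_marginal_liks, prior_probs):
    """Posterior model probabilities p(M_i | data), proportional to
    p(M_i) * p(data | M_i). Under the 0/1 utility, the optimal
    decision is the model with the largest posterior probability."""
    log_post = np.log(np.asarray(prior_probs)) + np.asarray(log_marginal_liks)
    log_post -= log_post.max()          # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical log marginal likelihoods, equal prior weight on each model
probs = posterior_model_probs([-104.2, -101.7, -103.5], [1/3, 1/3, 1/3])
```

Working on the log scale and subtracting the maximum before exponentiating avoids underflow, since marginal likelihoods are typically astronomically small numbers.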
“…Thus, a good model would be one that provides a large value of p(z_new | z, Θ). Proper rules for comparing a data value z_new with its predictive distribution involve the logarithm of the height of p(z_new | z, Θ), i.e. log p(z_new | z, Θ) (Gneiting and Raftery 2007; Krnjajić et al. 2008; Draper and Krnjajić 2010; Draper 2013). This metric of predictive quality is known as the log-score (LS).…”
Section: Model Selection For The Habitat Model
mentioning
confidence: 99%
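The log-score described above is simply the log of the predictive density evaluated at a held-out observation, with higher values indicating better predictive quality. A minimal sketch, assuming for illustration a standard Normal predictive distribution (the distribution and the value z_new = 0.5 are assumptions, not from the source):

```python
import math

def log_score(predictive_density, z_new):
    """Log-score LS = log p(z_new | z, Theta): the log of the height of
    the predictive density at the held-out observation; higher is better."""
    return math.log(predictive_density(z_new))

def normal_pdf(z, mu=0.0, sigma=1.0):
    """Density of a Normal(mu, sigma^2), standing in for p(z_new | z, Theta)."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Score a single hypothetical held-out value under the predictive distribution
ls = log_score(normal_pdf, 0.5)
```

In applications the log-scores of many held-out observations would be averaged, so that models are compared on overall out-of-sample predictive accuracy.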
“…This suggests we would need to run a number of MCMC models for each covariate, with each run excluding a different set of data points from model estimation (e.g., Draper and Krnjajić 2010; Shelton et al. 2012; Draper 2013). In practice this is infeasible because of the long computing times for models estimated with MCMC.…”
Section: Model Selection For The Habitat Model
mentioning
confidence: 99%