2010
DOI: 10.1002/cjs.10045

An efficient computational approach for prior sensitivity analysis and cross‐validation

Abstract: Prior sensitivity analysis and cross-validation are important tools in Bayesian statistics. However, due to the computational expense of implementing existing methods, these techniques are rarely used. In this paper, the authors show how it is possible to use sequential Monte Carlo methods to create an efficient and automated algorithm to perform these tasks. They apply the algorithm to the computation of regularization path plots and to assess the sensitivity of the tuning parameter in g-prior model selection…
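The reweighting idea behind the paper's approach can be illustrated with a plain importance-sampling sketch: draws from the posterior under one prior tuning value are reused for another by reweighting, since the likelihood cancels in the importance ratio. (The full sequential Monte Carlo algorithm adds resampling and intermediate distributions; the conjugate-normal model and all names below are illustrative, not the paper's example.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model (illustrative): y_i ~ N(theta, sigma2), theta ~ N(0, tau^2).
sigma2, n = 1.0, 20
y = rng.normal(0.8, np.sqrt(sigma2), size=n)

def posterior_params(tau2):
    prec = n / sigma2 + 1.0 / tau2          # posterior precision
    return (y.sum() / sigma2) / prec, 1.0 / prec

# Sample once from the posterior under a baseline prior scale tau0 ...
tau0, tau1 = 1.0, 0.3
m0, v0 = posterior_params(tau0 ** 2)
theta = rng.normal(m0, np.sqrt(v0), size=100_000)

# ... then assess sensitivity to any other tau1 by reweighting: the
# likelihood cancels, leaving only the ratio of the two priors.
log_w = -0.5 * theta ** 2 * (1.0 / tau1 ** 2 - 1.0 / tau0 ** 2)
w = np.exp(log_w - log_w.max())
w /= w.sum()

approx_mean = np.sum(w * theta)             # reweighted posterior mean
exact_mean, _ = posterior_params(tau1 ** 2) # exact answer for comparison
```

No new posterior simulation is needed for the second prior, which is what makes sweeping a tuning parameter (as in a regularization path plot) cheap.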

Cited by 31 publications (34 citation statements). References 41 publications.
“…Park and Casella [39] explore this setting, demonstrating that as the shrinkage parameter λ is increased, the Bayesian lasso coefficient estimates tend to zero more slowly than under the original version of the lasso, but that for appropriately chosen penalty parameters the posterior median estimates are very close to those from the original lasso. The Bayesian version of the elastic net uses a combination of double exponential and normal priors to capture the L1 and L2 penalty terms of the original elastic net [5, 30]. The Bayesian adaptive lasso avoids over-penalization of large effects through a prior formulation which allows the scale parameter to vary across coefficients [19].…”
Section: Methods
confidence: 99%
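The correspondence this statement invokes between the L1 penalty and the double-exponential prior is easiest to see in one dimension: the lasso solution is soft-thresholding, and the MAP estimate under a Laplace prior minimizes the same objective. A minimal sketch (all names illustrative):

```python
import numpy as np

def soft_threshold(z, lam):
    # Lasso solution for one coefficient with a unit design:
    # argmin_b 0.5 * (z - b)**2 + lam * |b|
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# With a double-exponential (Laplace) prior p(b) ∝ exp(-lam * |b|), the
# negative log-posterior is, up to constants, the same objective, so the
# MAP estimate is this same soft-thresholded value.
z = np.array([-2.0, -0.5, 0.3, 1.8])
print(soft_threshold(z, 1.0))  # small coefficients are set exactly to zero
```

The posterior median of the Bayesian lasso, by contrast, is not exactly zero, which is why the statement notes it only approaches the original lasso estimates for well-chosen penalty parameters.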
“…Because of its relation to product partition models (described later), we term this prior the product graphical model prior. To clearly demonstrate the control the product graphical model prior (5) gives relative to the binomial prior, we set b = 1/1000, highly penalizing the number of separators and hence resulting in highly separated cliques. In addition, we look at two different values for a; a = 0.1, resulting in fewer and larger cliques, and a = 10, resulting in more (but smaller) cliques.…”
Section: A New Prior Distribution On Decomposable Graphs
confidence: 99%
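The qualitative control attributed to a and b can be mimicked with a toy score of the stated product form. This is an assumed simplification for illustration only, not the paper's exact prior (which is defined over the clique and separator sets of a decomposable graph):

```python
import math

def log_prior_score(n_cliques, a, b):
    # ASSUMED simplified form: each clique contributes a factor a and each
    # separator a factor b; in a junction tree, #separators = #cliques - 1.
    return n_cliques * math.log(a) + (n_cliques - 1) * math.log(b)

a = 0.1
b = 1e-3  # tiny b heavily penalizes separators ...
print(log_prior_score(2, a, b) > log_prior_score(5, a, b))  # True: fewer separators win

# ... while raising a shrinks the relative penalty on having more cliques,
# matching the described effect of a = 10 versus a = 0.1.
gap = lambda a_: log_prior_score(5, a_, b) - log_prior_score(2, a_, b)
print(gap(10.0) > gap(0.1))  # True
```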
“…Figure 3 shows the 4 graphs. In contrast, the separation of cliques from the product graphical model prior (5) would allow these crops to be planted together. Such decisions could be made from the highest posterior graph, or by conducting Bayesian model averaging to obtain the expected utility of a given decision.…”
Section: Example: Modeling Agricultural Output Of Different Species
confidence: 99%
“…It may be noted that the prior covariance matrix is related to the Fisher information matrix in the linear model. This prior and its variants have been widely used in the literature in linear models; see, for example, Zellner (1986), Chaturvedi et al (1997), Fernández et al (2001), Consonni and Veronese (2008), Krishna et al (2009) and Bornn et al (2010).…”
Section: Introduction
confidence: 98%
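The relation this statement notes between the prior covariance and the Fisher information is Zellner's g-prior, β | σ² ~ N(0, g σ² (XᵀX)⁻¹): since the Fisher information for β in the linear model is XᵀX/σ², the prior covariance is g times its inverse. A minimal sketch (the design matrix and the choice g = n are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))   # illustrative design matrix
sigma2, g = 2.0, float(n)     # unit-information choice g = n

# Zellner's g-prior covariance: g * sigma2 * inv(X'X).
XtX = X.T @ X
prior_cov = g * sigma2 * np.linalg.inv(XtX)

# Fisher information for beta in the linear model is X'X / sigma2, so the
# prior covariance equals g times the inverse Fisher information.
fisher_info = XtX / sigma2
print(np.allclose(prior_cov, g * np.linalg.inv(fisher_info)))  # True
```

Scaling the inverse information by g makes the prior carry roughly 1/g of the information in the data, which is why the choice of g is the tuning parameter whose sensitivity the present paper analyzes.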