2013
DOI: 10.1080/10618600.2012.694756
Adaptive Monte Carlo for Bayesian Variable Selection in Regression Models

Cited by 27 publications (42 citation statements)
References 26 publications
“…For each metric from IMUs or hdEMG, the algorithm built every possible subset of the 281 structural features. For each subset, a Monte Carlo simulation algorithm was applied to find the optimal coefficients of the linear model (Lamnisos, Griffin, & Steel). Variables were added to the model under a criterion that guards against overfitting, that is, each new model update was assessed on a subset of the total sample.…”
Section: Methods
confidence: 99%
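The subset-search idea described in the quote above, proposing inclusion patterns and scoring the fitted linear model, can be sketched as a single-flip Metropolis sampler over binary inclusion indicators. This is a generic illustration, not the cited authors' algorithm: the BIC-style score (a stand-in for a marginal likelihood), the name `flip_move_selection`, and all parameters are assumptions for this sketch.

```python
import numpy as np

def flip_move_selection(X, y, n_iter=2000, seed=0):
    # Stochastic-search variable selection: a Metropolis sampler over
    # binary inclusion vectors gamma. Each subset is scored by a
    # BIC-style criterion (an assumption for this sketch, standing in
    # for the marginal likelihood of the subset model).
    rng = np.random.default_rng(seed)
    n, p = X.shape

    def score(gamma):
        k = int(gamma.sum())
        Xg = X[:, gamma] if k else np.ones((n, 1))
        beta, *_ = np.linalg.lstsq(Xg, y, rcond=None)
        rss = np.sum((y - Xg @ beta) ** 2)
        return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

    gamma = np.zeros(p, dtype=bool)
    cur = score(gamma)
    visits = np.zeros(p)
    for _ in range(n_iter):
        j = rng.integers(p)            # propose flipping one indicator
        gamma[j] = ~gamma[j]
        new = score(gamma)
        if np.log(rng.uniform()) < new - cur:
            cur = new                  # accept the flip
        else:
            gamma[j] = ~gamma[j]       # reject: undo the flip
        visits += gamma
    return visits / n_iter             # inclusion frequencies per variable
```

Variables with real predictive value should be visited by the chain in nearly every iteration, while noise variables are included only transiently.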
“…Unlike model selection methods, it also does not assign binary zero-one weights to the OLS coefficients. Other approaches that apply flexible weighting to individual predictors include bagging (Breiman, 1996), which applies differential shrinkage weights to each coefficient; the adaptive Lasso (Zou, 2006), which applies variable-specific weights to the individual predictors in a data-dependent, adaptive manner; the Elastic Net (Zou and Hastie, 2005; Zou and Zhang, 2009), which introduces extra parameters to control the penalty for inclusion of additional variables; and Bayesian methods such as adaptive Monte Carlo (Lamnisos et al., 2012).…”
Section: Introduction
confidence: 99%
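The adaptive Lasso mentioned above can be sketched in its common two-stage form: fit an initial OLS estimate, weight each variable's penalty by the inverse of that estimate, then solve the weighted Lasso by coordinate descent with soft-thresholding. This is a minimal illustration, not the cited implementation; the function name `adaptive_lasso` and the values of `lam` and `n_sweeps` are assumptions.

```python
import numpy as np

def adaptive_lasso(X, y, lam=0.1, n_sweeps=200):
    # Two-stage adaptive Lasso (in the spirit of Zou, 2006):
    # variable-specific penalty weights w_j = 1 / |beta_ols_j|, so
    # coefficients that look large in the pilot fit are shrunk less.
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / (np.abs(beta_ols) + 1e-8)   # data-dependent weights
    beta = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r
            t = lam * n * w[j]            # weighted threshold
            beta[j] = np.sign(z) * max(abs(z) - t, 0.0) / col_sq[j]
    return beta
```

Because noise variables get small pilot estimates and hence large weights, they are typically thresholded exactly to zero, while strong signals are nearly unbiased.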
“…Therefore, τ1 and τ2 are chosen to ensure that the acceptance rates for each move type are between 20% and 30%. Alternatively, we could have used an adaptive MH algorithm, as in Lamnisos et al. [37]. However, the standard MH algorithm that we specify has good convergence properties and is easy to implement.…”
Section: Bayesian Variable Selection For Paired Case-Control Data
confidence: 99%
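The tuning idea in the quote above, adjusting proposal scales so acceptance rates land in a target band, can be sketched with a Robbins-Monro update of the proposal scale in a random-walk Metropolis sampler. This is a generic sketch of acceptance-rate adaptation, not the cited paper's algorithm; the target rate of 0.25 and the decay exponent are illustrative choices.

```python
import numpy as np

def adaptive_rw_mh(log_target, x0, n_iter=5000, target_acc=0.25, seed=0):
    # Random-walk Metropolis with Robbins-Monro adaptation of the
    # log proposal scale, nudging it up after each acceptance and
    # down after each rejection so the rate settles near target_acc.
    rng = np.random.default_rng(seed)
    x, log_tau = x0, 0.0
    samples, n_acc = [], 0
    for t in range(1, n_iter + 1):
        prop = x + np.exp(log_tau) * rng.normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
            n_acc += 1
            acc = 1.0
        else:
            acc = 0.0
        # Diminishing-adaptation step: the t**-0.6 gain shrinks over
        # time, so the chain's stationary distribution is preserved.
        log_tau += t ** -0.6 * (acc - target_acc)
        samples.append(x)
    return np.array(samples), n_acc / n_iter

# Usage: sample a standard normal via its log-density.
samples, rate = adaptive_rw_mh(lambda x: -0.5 * x * x, x0=0.0)
```

The same update applies per move type: in a variable-selection sampler one would keep a separate adapted scale for each kind of proposal, mirroring the separate τ1 and τ2 above.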