Forecasting seasonal time series data: a Bayesian model averaging approach (2018)
DOI: 10.1007/s00180-018-0801-3

Cited by 14 publications (10 citation statements). References 53 publications.
“…Of course, BMA is not limited to these scenarios and can be applied whenever there is model uncertainty. Other examples of BMA applications include the estimation of effect size (Haldane, 1932), linear regression (Clyde, Ghosh, & Littman, 2011), assessment of the replicability of effects (Iverson, Wagenmakers, & Lee, 2010), prediction in time-series analysis (Vosseler & Weber, 2018), analysis of the causal structure in a brain network (Penny et al, 2010), structural equation modeling (Kaplan & Lee, 2016), factor analysis (Dunson, 2006), and correcting for publication bias using the precision-effect test and precision-effect estimate with standard errors (Carter & McCullough, 2018). In general, BMA reduces overconfidence, results in optimal predictions (under mild conditions), avoids threshold-based all-or-nothing decision making, and is relatively robust against model misspecification.…”
Section: Concluding Comments
confidence: 99%
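The excerpt above notes that BMA yields near-optimal predictions by weighting each candidate model's forecast by its posterior model probability. A minimal sketch of this idea is below, using the common BIC-based approximation to the posterior model probabilities; this is an illustration only, not the MCMC-based procedure of the paper itself, and all function names are hypothetical.

```python
import math

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    using p(M_k | y) proportional to exp(-BIC_k / 2)."""
    m = min(bics)  # subtract the minimum for numerical stability
    unnorm = [math.exp(-(b - m) / 2.0) for b in bics]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def bma_forecast(forecasts, bics):
    """Model-averaged point forecast: sum_k w_k * f_k,
    where w_k is the approximate posterior probability of model k."""
    return sum(w * f for w, f in zip(bma_weights(bics), forecasts))

# Three candidate models' one-step-ahead forecasts and their BIC values
# (illustrative numbers, not from the paper):
print(bma_forecast([1.0, 2.0, 3.0], [100.0, 102.0, 110.0]))
```

The averaged forecast lands between the individual forecasts, dominated by the model with the lowest BIC; this smooth weighting is what the excerpt means by avoiding "threshold-based all-or-nothing decision making".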
“…Of course, BMA is not limited to these scenarios and can be applied whenever there is model uncertainty. Other examples of BMA applications include the estimation of effect size (Haldane, 1932), linear regression (Clyde et al, 2011), replicability of effects (Iverson, Wagenmakers, & Lee, 2010), prediction in time series analysis (Vosseler & Weber, 2018), analysis of the causal structure between brain regions (Penny et al, 2010), structural equation modelling (Kaplan & Lee, 2016), factor analysis (Dunson, 2006) and the PET-PEESE decision in correcting for publication bias (Carter & McCullough, 2018). In general, BMA reduces overconfidence, results in optimal predictions (under mild conditions), avoids threshold-based all-or-nothing decision making and is relatively robust against model misspecification.…”
Section: Concluding Comments
confidence: 99%