The 2011 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2011.6033566

Turning Bayesian model averaging into Bayesian model combination

Abstract: Bayesian methods are theoretically optimal in many situations. Bayesian model averaging is generally considered the standard model for creating ensembles of learners using Bayesian methods, but this technique is often outperformed by more ad hoc methods in empirical studies. The reason for this failure has important theoretical implications for our understanding of why ensembles work. It has been proposed that Bayesian model averaging struggles in practice because it accounts for uncertainty about which model …
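To make the distinction in the abstract concrete, below is a minimal sketch of Bayesian model averaging (BMA) versus Bayesian model combination (BMC). The array shapes, the Dirichlet prior over combination weights, and the likelihood-on-held-out-data approximation of the posterior are illustrative assumptions, not the exact procedure of Monteith et al. (2011): the point is only that BMA places its posterior over individual models, while BMC places it over weighted combinations of models.

```python
import numpy as np

rng = np.random.default_rng(0)


def log_likelihood(probs, y):
    """Log-likelihood of integer labels y under predicted class probabilities (N, C)."""
    return float(np.log(probs[np.arange(len(y)), y] + 1e-12).sum())


def bma_predict(model_probs, y_val):
    """BMA: weight each individual model by its posterior, approximated here
    by its likelihood on held-out data under a uniform prior.
    model_probs has shape (n_models, N, C)."""
    log_post = np.array([log_likelihood(p, y_val) for p in model_probs])
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return np.tensordot(w, model_probs, axes=1)            # (N, C)


def bmc_predict(model_probs, y_val, n_combinations=1000):
    """BMC: place the posterior over weighted *combinations* of models
    (sampled here from a Dirichlet prior) rather than over the models themselves."""
    k = model_probs.shape[0]
    combo_w = rng.dirichlet(np.ones(k), size=n_combinations)   # (n_combinations, k)
    combo_preds = np.tensordot(combo_w, model_probs, axes=1)   # (n_combinations, N, C)
    log_post = np.array([log_likelihood(p, y_val) for p in combo_preds])
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return np.tensordot(w, combo_preds, axes=1)             # (N, C)
```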

Cited by 66 publications (65 citation statements) | References 6 publications
“…It has been shown to outperform BMA across a wide variety of datasets [see Monteith et al (2011)]. Owing to computational considerations, however, both BMA and BMC are most suited to smaller ensembles.…”
Section: Results
confidence: 99%
“…The degree of performance improvement provided by the EP ensemble was found to be dependent on the details of the calibration method employed, particularly with respect to probabilistic skill. For RMSE, the advantage ranged up to several tenths of a degree Fahrenheit, whereas for BSS this advantage decreased from 14.7% for the simplest calibration to 11.2% using Bayesian model combination [BMC; see Monteith et al (2011)]. Further improvement was found by using BMC on a pooled set of ensemble members derived from the EP ensemble and the GFS MOS ensemble.…”
Section: Introduction
confidence: 99%
“…Here we have contented ourselves with using the simple yet powerful method of Bagging [8] to form ensembles. Other ensembling mechanisms such as Negative Correlation Learning [23], Bayesian Model Combination [24], or pruning of ensembles [25] offer interesting alternatives.…”
Section: Discussion
confidence: 99%
“…BMC with linear weights (e.g., Monteith et al 2011) is then applied and the selected ensemble members and weights are used to produce the next day's forecast. As in Roebber (2015a,b), we evaluate four possible weights for each ensemble member, resulting in the evaluation of 4^10, or 1,048,576, possible combinations, with normalized weights ranging from 1/37 to 4/13.…”
Section: B. Model Selection and Bayesian Model Combination Adaptation
confidence: 99%
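As a sanity check on the combinatorics quoted in the statement above, the snippet below reproduces the 4^10 count and the 1/37 to 4/13 range of normalized weights. The candidate weight set {1, 2, 3, 4} is an assumption inferred from those bounds, not stated explicitly in the excerpt.

```python
# Assumed setup: a 10-member ensemble, each member assigned one of four
# candidate integer weights {1, 2, 3, 4}.
n_members = 10
weights = (1, 2, 3, 4)

print(len(weights) ** n_members)        # 4^10 = 1048576 possible combinations

# A member's normalized weight is smallest when it receives weight 1 while
# every other member receives 4, and largest in the opposite case:
print(1 / (1 + (n_members - 1) * 4))    # 1/37
print(4 / (4 + (n_members - 1) * 1))    # 4/13
```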