“…For example, Baumeister et al. (2017) pointed out that different models outperformed others over different forecast horizons, and they claimed that taking a simple average of the predictions from all models yielded the best forecasts. The simple average can be viewed as the most basic form of ensemble learning, whose popular methods include boosting (Freund & Schapire, 1997; Mason, Baxter, Bartlett, & Frean, 2000), bagging (Breiman, 1996), stacking (Smyth & Wolpert, 1999; Clarke, 2003), and Bayesian model averaging (Hoeting, Madigan, Raftery, & Volinsky, 1999; Amini & Parmeter, 2011). Taking the Bayesian model averaging (BMA) method as an example: assuming that the frequencies with which the $k$ candidate models are selected for prediction follow a multinomial distribution with parameters $p_i$, a conjugate Dirichlet prior can be assigned to the $p_i$'s, denoted as…”
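The Dirichlet–multinomial setup described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the model count, the selection counts, and the point forecasts are all hypothetical, and the prior is taken to be a uniform Dirichlet with all concentration parameters equal to 1.

```python
import numpy as np

# Hypothetical setup: k candidate models, with counts of how often each
# model was selected (e.g., produced the best forecast) in past periods.
k = 3
selection_counts = np.array([12, 7, 4])  # illustrative data, not from the paper

# Conjugate Dirichlet prior on the multinomial selection probabilities p_i;
# alpha_i = 1 for every model corresponds to a uniform prior over the simplex.
alpha_prior = np.ones(k)

# By conjugacy, the posterior is Dirichlet(alpha_prior + selection_counts).
alpha_post = alpha_prior + selection_counts

# The posterior mean of each p_i serves as that model's weight.
weights = alpha_post / alpha_post.sum()

# Combine the individual models' point forecasts with these weights
# (a BMA-style weighted average); forecasts here are made-up numbers.
forecasts = np.array([2.1, 1.8, 2.4])
combined = weights @ forecasts
print(weights, combined)
```

Under this sketch, models selected more often in the past receive larger posterior weights, and the uniform prior keeps every model's weight strictly positive even if it was never selected.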