This article describes the use of Bayesian methods in the statistical analysis of time series. The use of Markov chain Monte Carlo methods has made even the more complex time series models amenable to Bayesian analysis. Models discussed in some detail are ARIMA models and their fractionally integrated counterparts, state-space models, Markov switching and mixture models, and models allowing for time-varying volatility. A final section reviews some recent approaches to nonparametric Bayesian modelling of time series.
Bayesian methods

The importance of Bayesian methods in econometrics has increased rapidly over the last decade. This is, no doubt, fuelled by an increasing appreciation of the advantages that Bayesian inference entails. In particular, it provides us with a formal way to incorporate the prior information we often possess before seeing the data, it fits naturally with sequential learning and decision making, and it leads directly to exact small-sample results. In addition, the Bayesian paradigm is particularly natural for prediction, as it takes into account all parameter and even model uncertainty. The predictive distribution is the sampling distribution with the parameters integrated out using the posterior distribution; it is exactly what we need for forecasting, often a key goal of time-series analysis.

Usually, the choice of a particular econometric model is not prespecified by theory, and many competing models can be entertained. Comparing models can be done formally in a Bayesian framework through so-called posterior odds, the product of the prior odds and the Bayes factor. The Bayes factor between any two models is the ratio of their marginal likelihoods, that is, the likelihoods integrated with respect to the corresponding priors, and it summarizes how the data favour one model over the other. Given a set of possible models, this immediately leads to posterior model probabilities. Rather than choosing a single model, a natural way to deal with model uncertainty is to use the posterior model probabilities to average inference across the models.
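To fix ideas, the quantities just described can be written out explicitly. The notation in this sketch is ours rather than taken from the article: data $y$, candidate models $M_i$ with parameters $\theta_i$, a future observation $y_f$, and a generic quantity of interest $\Delta$.

The predictive distribution integrates the sampling density of $y_f$ over the posterior of the parameters,
\[
p(y_f \mid y, M_i) = \int p(y_f \mid \theta_i, y, M_i)\, p(\theta_i \mid y, M_i)\, d\theta_i .
\]
The marginal likelihood of model $M_i$ integrates the likelihood with respect to the prior, and the posterior odds of $M_1$ versus $M_2$ are the prior odds times the Bayes factor,
\[
p(y \mid M_i) = \int p(y \mid \theta_i, M_i)\, p(\theta_i \mid M_i)\, d\theta_i ,
\qquad
\frac{P(M_1 \mid y)}{P(M_2 \mid y)} = \frac{P(M_1)}{P(M_2)} \times \frac{p(y \mid M_1)}{p(y \mid M_2)} .
\]
Averaging inference on $\Delta$ with the posterior model probabilities as weights gives
\[
p(\Delta \mid y) = \sum_i P(M_i \mid y)\, p(\Delta \mid y, M_i) ,
\]
which is the model-averaged posterior alluded to at the end of the paragraph above.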