In this chapter we discuss the basic notions about state space models and their use in time series analysis. The dynamic linear model is presented as a special case of a general state space model, being linear and Gaussian. For dynamic linear models, estimation and forecasting can be obtained recursively by the well-known Kalman filter.

Introduction

In recent years there has been an increasing interest in the application of state space models in time series analysis; see, for example, Harvey (1989), West and Harrison (1997), Durbin and Koopman (2001), the recent overviews by Künsch (2001) and Migon et al. (2005), and the references therein. State space models consider a time series as the output of a dynamic system perturbed by random disturbances. They allow a natural interpretation of a time series as the combination of several components, such as trend, seasonal, or regression components. At the same time, they have an elegant and powerful probabilistic structure, offering a flexible framework for a very wide range of applications. Computations can be implemented by recursive algorithms: the problems of estimation and forecasting are solved by recursively computing the conditional distribution of the quantities of interest, given the available information. In this sense, state space models are quite naturally treated within a Bayesian framework.

State space models can be used to model univariate or multivariate time series, also in the presence of non-stationarity, structural changes, and irregular patterns. To develop a feeling for the possible applications of state space models in time series analysis, consider for example the data plotted in Figure 2.1. This time series appears fairly predictable, since it repeats its behavior quite regularly over time: we see a trend and a rather regular seasonal component, with a slightly increasing variability. For data of this kind, we would probably be happy with a fairly simple time series model, with a trend
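As an aside, the recursive filtering mentioned above can be sketched in a few lines. The Python fragment below implements one generic Kalman filter step for a dynamic linear model with observation matrix F, evolution matrix G, and noise covariances V and W; it is a minimal illustration with notation chosen here, not code taken from the chapter.

```python
import numpy as np

def kalman_step(m, C, y, F, G, V, W):
    """One prediction/update step of the Kalman filter for a dynamic linear model.

    m, C : filtering mean and covariance of the state at time t-1
    y    : observation at time t
    F, G : observation and state evolution matrices
    V, W : observation and state noise covariances
    """
    # Prediction: prior mean and covariance of the state at time t
    a = G @ m
    R = G @ C @ G.T + W
    # One-step-ahead forecast of the observation
    f = F @ a
    Q = F @ R @ F.T + V
    # Update: posterior at time t via the Kalman gain
    K = R @ F.T @ np.linalg.inv(Q)
    m_new = a + K @ (y - f)
    C_new = R - K @ Q @ K.T
    return m_new, C_new
```

Filtering a whole series then amounts to iterating this step over the observations, carrying the mean and covariance forward in time.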
Random Bernstein polynomials which are also probability distribution functions on the closed unit interval are studied. The probability law of a Bernstein polynomial so defined provides a novel prior on the space of distribution functions on [0, 1], which has full support and can easily select absolutely continuous distribution functions with a continuous and smooth derivative. In particular, the Bernstein polynomial which approximates a Dirichlet process is studied. This may be of interest in Bayesian non-parametric inference. In the second part of the paper, we study the posterior from a "Bernstein–Dirichlet" prior and suggest a hybrid Monte Carlo approximation of it. The proposed algorithm has some aspects of novelty since the problem under examination has a "changing dimension" parameter space.
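As background for the construction described above (a standard definition, stated here rather than quoted from the paper): the Bernstein polynomial of order k associated with a distribution function F on [0, 1] is

  B_k(x; F) = \sum_{j=0}^{k} F(j/k) \binom{k}{j} x^j (1 - x)^{k-j},

and, when F(0) = 0, its derivative is the mixture of beta densities

  b_k(x; F) = \sum_{j=1}^{k} \big( F(j/k) - F((j-1)/k) \big) \, \mathrm{Beta}(x;\, j,\, k - j + 1),

so a random F, for instance a Dirichlet process, induces a random density on [0, 1].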
A Bernstein prior is a probability measure on the space of all the distribution functions on [0, 1]. Under very general assumptions, it selects absolutely continuous distribution functions, whose densities are mixtures of known beta densities. The Bernstein prior is of interest in Bayesian nonparametric inference with continuous data. We study the consistency of the posterior from a Bernstein prior. We first show that, under mild assumptions, the posterior is weakly consistent for any distribution function P 0 on [0, 1] with continuous and bounded Lebesgue density. With slightly stronger assumptions on the prior, the posterior is also Hellinger consistent. This implies that the predictive density from a Bernstein prior, which is a Bayesian density estimate, converges in the Hellinger sense to the true density (assuming that it is continuous and bounded). We also study a sieve maximum likelihood version of the density estimator and show that it is also Hellinger consistent under weak assumptions. When the order of the Bernstein polynomial, i.e. the number of components in the beta distribution mixture, is truncated, we show that under mild restrictions the posterior concentrates on the set of pseudotrue densities. Finally, we study the behaviour of the predictive density numerically and we also study a hybrid Bayes–maximum likelihood density estimator.
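For reference, Hellinger consistency above refers to convergence of the density estimate to the true density in the Hellinger distance, which under one standard normalization is

  H(f, g) = \left( \tfrac{1}{2} \int_0^1 \big( \sqrt{f(x)} - \sqrt{g(x)} \big)^2 \, dx \right)^{1/2};

this definition is given as background and is not quoted from the paper.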
We propose a Bayesian nonparametric procedure for density estimation, for data in a closed, bounded interval, say [0, 1]. To this aim, we use a prior based on Bernstein polynomials. This corresponds to expressing the density of the data as a mixture of given beta densities, with random weights and a random number of components. The density estimate is then obtained as the corresponding predictive density function. Comparison with classical and Bayesian kernel estimates is provided. The proposed procedure is illustrated in an example; an MCMC algorithm for approximating the estimate is also discussed.
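To illustrate the beta-mixture form mentioned above, the following sketch evaluates a Bernstein density for a fixed number of components k and given mixture weights; the function name and example weights are chosen here for illustration, and the MCMC step that samples the weights and k is not shown.

```python
import numpy as np
from scipy.stats import beta

def bernstein_density(x, weights):
    """Evaluate a Bernstein (beta-mixture) density at points x in [0, 1].

    weights : w_1, ..., w_k, nonnegative and summing to one; component j
              is the Beta(j, k - j + 1) density.
    """
    k = len(weights)
    x = np.asarray(x, dtype=float)
    dens = np.zeros_like(x)
    for j, w in enumerate(weights, start=1):
        dens += w * beta.pdf(x, j, k - j + 1)
    return dens

# Example: k = 3 components with weights (0.2, 0.5, 0.3)
xs = np.linspace(0, 1, 5)
print(bernstein_density(xs, [0.2, 0.5, 0.3]))
```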
Bayesian inference is attractive for its coherence and good frequentist properties. However, it is a common experience that eliciting an honest prior may be difficult and, in practice, people often take an empirical Bayes approach, plugging empirical estimates of the prior hyperparameters into the posterior distribution. Even if not rigorously justified, the underlying idea is that, when the sample size is large, empirical Bayes leads to "similar" inferential answers. Yet, precise mathematical results seem to be missing. In this work, we give a more rigorous justification in terms of merging of Bayes and empirical Bayes posterior distributions. We consider two notions of merging: Bayesian weak merging and frequentist merging in total variation. Since weak merging is related to consistency, we provide sufficient conditions for consistency of empirical Bayes posteriors. Also, we show that, under regularity conditions, the empirical Bayes procedure asymptotically selects the value of the hyperparameter for which the prior most favors the "truth". Examples include empirical Bayes density estimation with Dirichlet process mixtures.
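To make the plug-in idea concrete, here is a toy conjugate example (a normal-means model chosen here for illustration, not the Dirichlet process mixture setting studied in the paper): the prior variance is estimated from the marginal distribution of the data and then plugged into the posterior mean.

```python
import numpy as np

# Toy empirical Bayes example: y_i ~ N(theta_i, 1), theta_i ~ N(0, tau2).
# The hyperparameter tau2 is estimated from the data and plugged into the
# posterior, mimicking the empirical Bayes approach described above.
rng = np.random.default_rng(0)
tau2_true = 4.0
theta = rng.normal(0.0, np.sqrt(tau2_true), size=200)
y = theta + rng.normal(size=200)

# Marginally y_i ~ N(0, 1 + tau2), so a simple moment estimate of the
# hyperparameter is Var(y) - 1 (truncated at zero).
tau2_hat = max(np.var(y) - 1.0, 0.0)

# Plug the estimated hyperparameter into the Bayes posterior mean:
# E[theta_i | y_i] = tau2 / (1 + tau2) * y_i.
shrinkage = tau2_hat / (1.0 + tau2_hat)
theta_eb = shrinkage * y
print(f"estimated tau^2 = {tau2_hat:.2f}, shrinkage factor = {shrinkage:.2f}")
```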