Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, and there is no agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model's ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric.
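A standard deterministic metric for the first question is a mean squared skill score (MSSS) of the initialized hindcasts relative to the uninitialized projections; the sketch below is a minimal illustration, with function names and toy data of my own choosing rather than the paper's:

```python
import numpy as np

def msss(forecast, reference, observations):
    """Mean squared skill score of `forecast` relative to `reference`,
    both verified against `observations`. Positive values mean the
    forecast beats the reference; 1 would be a perfect forecast."""
    mse_f = np.mean((forecast - observations) ** 2)
    mse_r = np.mean((reference - observations) ** 2)
    return 1.0 - mse_f / mse_r

# Toy example: the initialized hindcast tracks the observations more
# closely than the uninitialized projection, so the score is positive.
obs = np.array([0.10, 0.30, 0.20, 0.50, 0.40])
initialized = np.array([0.15, 0.25, 0.25, 0.45, 0.35])
uninitialized = np.array([0.30, 0.10, 0.40, 0.20, 0.60])
score = msss(initialized, uninitialized, obs)
```

In practice the anomalies would be computed against a common climatology and the score evaluated per lead time, but the structure of the comparison is the same.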
A new field called "decadal prediction" will use initialized climate models to produce time-evolving predictions of regional climate that will bridge ENSO forecasting and future climate change projections.
Regional temperature change projections for the twenty-first century are generated using a multimodel ensemble of atmosphere-ocean general circulation models. The models are assigned coefficients jointly, using a Bayesian linear model fitted to regional observations and simulations of the climate of the twentieth century. Probability models with varying degrees of complexity are explored, and a selection is made based on Bayesian deviance statistics, coefficient properties, and a classical cross-validation measure utilizing temporally averaged data. The model selected is shown to be superior in predictive skill to a naïve model consisting of the unweighted mean of the underlying atmosphere-ocean GCM (AOGCM) simulations, although the skill differential varies regionally. Temperature projections for the A2 and B1 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios are presented.
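The weighting idea can be illustrated with a much-simplified stand-in for the Bayesian coefficient fitting described above: weight each model by its inverse mean squared error against twentieth-century observations, then form a weighted projection instead of the unweighted multimodel mean (all names and data below are illustrative, not the paper's):

```python
import numpy as np

def performance_weights(sims, obs):
    """Weight each model by the inverse of its mean squared error
    against observations; a crude proxy for the paper's Bayesian
    linear model, shown only to make the weighting idea concrete."""
    mse = np.mean((sims - obs) ** 2, axis=1)
    w = 1.0 / mse
    return w / w.sum()

# sims: (n_models, n_times) historical simulations; obs: observations.
sims = np.array([[0.10, 0.20, 0.30],   # close to obs -> large weight
                 [0.50, 0.90, 0.10]])  # far from obs -> small weight
obs = np.array([0.10, 0.25, 0.30])
w = performance_weights(sims, obs)
weighted_mean = w @ sims  # compare with the naive sims.mean(axis=0)
```

The paper's point is that such performance-based coefficients, fitted jointly and probabilistically, can outperform the naive unweighted mean, with the skill differential varying by region.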
Discrete-time hidden Markov models are a broadly useful class of latent-variable models with applications in areas such as speech recognition, bioinformatics, and climate data analysis. It is common in practice to introduce temporal non-homogeneity into such models by making the transition probabilities dependent on time-varying exogenous input variables via a multinomial logistic parametrization. We extend such models to introduce additional non-homogeneity into the emission distribution using a generalized linear model (GLM), with data augmentation for sampling-based inference. However, the presence of the logistic function in the state transition model significantly complicates parameter inference for the overall model, particularly in a Bayesian context. To address this we extend the recently proposed Polya-Gamma data augmentation approach to handle non-homogeneous hidden Markov models (NHMMs), allowing the development of an efficient Markov chain Monte Carlo (MCMC) sampling scheme. We apply our model and inference scheme to 30 years of daily rainfall in India, leading to a number of insights into rainfall-related phenomena in the region. Our proposed approach allows for fully Bayesian analysis of relatively complex NHMMs on a scale that was not possible with previous methods. Software implementing the methods described in the paper is available via the R package NHMM.
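The multinomial logistic parametrization mentioned above maps time-varying inputs to a valid transition matrix via a row-wise softmax; a minimal sketch (array names and shapes are my own, not from the paper or the NHMM package):

```python
import numpy as np

def transition_probs(x_t, W, b):
    """Multinomial-logistic (softmax) transition probabilities for an
    NHMM: row i of the result gives P(state_t = j | state_{t-1} = i, x_t).
    W has shape (K, K, D), b has shape (K, K), x_t has shape (D,)."""
    logits = W @ x_t + b                          # (K, K) logit matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)       # rows sum to 1

K, D = 3, 2                      # 3 hidden states, 2 exogenous inputs
rng = np.random.default_rng(0)
W = rng.normal(size=(K, K, D))
b = rng.normal(size=(K, K))
P = transition_probs(rng.normal(size=D), W, b)   # a valid stochastic matrix
```

Because the transition matrix is recomputed at every time step from x_t, the chain is non-homogeneous; it is the logistic (softmax) link here that the Polya-Gamma augmentation makes tractable for Bayesian inference.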