Recent coordinated efforts, in which numerous general circulation climate models have been run for a common set of experiments, have produced large datasets of projections of future climate for various scenarios. Those multimodel ensembles sample initial conditions, parameters, and structural uncertainties in the model design, and they have prompted a variety of approaches to quantifying uncertainty in future climate change. International climate change assessments also rely heavily on these models. These assessments often provide equal-weighted averages as best-guess results, assuming that individual model biases will at least partly cancel and that a model-average prediction is more likely to be correct than a prediction from any single model, an assumption motivated by the result that a multimodel average of present-day climate generally outperforms any individual model. This study outlines the motivation for using multimodel ensembles and discusses various challenges in interpreting them. Among these challenges are that the number of models in these ensembles is usually small, that their distribution in model or parameter space is unclear, and that extreme behavior is often not sampled. Model skill in simulating present-day climate conditions is shown to relate only weakly to the magnitude of predicted change. It is thus unclear by how much the confidence in future projections should increase based on improvements in simulating present-day conditions, a reduction of intermodel spread, or a larger number of models. Averaging model output may further lead to a loss of signal; for precipitation change, for example, the predicted changes are spatially heterogeneous, so the true expected change is very likely to be larger than suggested by a model average. Last, there is little agreement on metrics to separate "good" and "bad" models, and there is concern that model development, evaluation, and posterior weighting or ranking all use the same datasets. While the multimodel average appears to still be useful in some situations, these results show that more quantitative methods to evaluate model performance are critical to maximize the value of climate change projections from global models.
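To illustrate the signal-loss point above, the following sketch uses entirely synthetic data (not actual climate model output): each hypothetical "model" predicts a precipitation-change peak of similar amplitude but at a slightly different location, and the equal-weighted multimodel mean ends up with a noticeably smaller peak than any individual model.

```python
# Minimal sketch with synthetic data: equal-weighted multimodel averaging
# and the signal loss it can cause for spatially heterogeneous change.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)          # idealized spatial coordinate
n_models = 10

# Each "model" places a change peak of similar amplitude at a slightly
# different location (spatial heterogeneity across the ensemble).
fields = np.array([
    np.exp(-0.5 * ((x - rng.uniform(0.35, 0.65)) / 0.05) ** 2)
    for _ in range(n_models)
])

multimodel_mean = fields.mean(axis=0)    # equal-weighted multimodel average

print("mean of individual peak amplitudes:", fields.max(axis=1).mean())  # close to 1
print("peak amplitude of multimodel mean:", multimodel_mean.max())       # markedly smaller
```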
Interpolation of a spatially correlated random process is used in many areas. The best unbiased linear predictor, often called the kriging predictor in geostatistics, requires the solution of a large linear system based on the covariance matrix of the observations. In this article, we show that tapering the correct covariance matrix with an appropriate compactly supported covariance function reduces the computational burden significantly and still yields an asymptotically optimal mean squared error. The effect of tapering is to create a sparse approximate linear system that can then be solved using sparse matrix algorithms. Extensive Monte Carlo simulations support the theoretical results. An application to a large climatological precipitation dataset is presented as a concrete practical illustration.
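A minimal sketch of the tapering idea described in this abstract, assuming an exponential covariance model and a spherical taper (both illustrative choices, not the paper's code). The Schur (element-wise) product zeroes every covariance beyond the taper range, so the kriging system becomes sparse and can be stored and solved with sparse matrix routines.

```python
# Covariance tapering for kriging: illustrative sketch with placeholder data.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
locs = rng.uniform(0.0, 10.0, size=(2000, 2))   # observation locations
obs = rng.standard_normal(2000)                  # observed values (placeholder)
d = cdist(locs, locs)                            # pairwise distances

def exponential_cov(d, range_=2.0, sill=1.0):
    """'True' covariance model (dense)."""
    return sill * np.exp(-d / range_)

def spherical_taper(d, theta=1.5):
    """Compactly supported correlation: exactly zero beyond distance theta."""
    t = np.clip(d / theta, 0.0, 1.0)
    return (1.0 - 1.5 * t + 0.5 * t**3) * (d < theta)

# Schur product of the model covariance with the taper is a valid covariance
# that vanishes beyond the taper range, hence a sparse linear system.
C_tapered = sparse.csc_matrix(exponential_cov(d) * spherical_taper(d))

# Simple-kriging-style prediction at a new location x0, tapering the
# cross-covariance vector in the same way.
x0 = np.array([[5.0, 5.0]])
c0 = exponential_cov(cdist(locs, x0)) * spherical_taper(cdist(locs, x0))
weights = spsolve(C_tapered, c0.ravel())
prediction = weights @ obs
```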
This work studies the effects of sampling variability in Monte Carlo-based methods to estimate very high-dimensional systems. Recent focus in the geosciences has been on representing the atmospheric state using a probability density function, and, for extremely high-dimensional systems, various sample-based Kalman filter techniques have been developed to address the problem of real-time assimilation of system information and observations. As the employed sample sizes are typically several orders of magnitude smaller than the system dimension, such sampling techniques inevitably induce considerable variability into the state estimate, primarily through prior and posterior sample covariance matrices. In this article, we quantify this variability with mean squared error measures for two Monte Carlo-based Kalman filter variants: the ensemble Kalman filter and the ensemble square-root Kalman filter. Expressions of the error measures are derived under weak assumptions and show that sample sizes need to grow proportionally to the square of the system dimension for bounded error growth. To reduce necessary ensemble size requirements and to address rank-deficient sample covariances, covariance-shrinking (tapering) based on the Schur product of the prior sample covariance and a positive definite function is demonstrated to be a simple, computationally feasible, and very effective technique. Rules for obtaining optimal taper functions are given for both stationary and nonstationary covariances, and optimal taper lengths are expressed in terms of the ensemble size and the practical range of the forecast covariance. Results are also presented for optimal covariance inflation. The theory is verified and illustrated with extensive simulations.
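The sketch below illustrates the Schur-product tapering (localization) step in a stochastic ensemble Kalman filter update. The state dimension, observation operator, and the simple triangular taper are all illustrative assumptions; a Gaspari-Cohn taper is the more common practical choice.

```python
# Tapered ensemble Kalman filter update: illustrative sketch with synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 500, 20, 50                      # state dim, ensemble size, obs count

ensemble = rng.standard_normal((n, m))     # prior ensemble, columns are members
y = rng.standard_normal(r)                 # observations (placeholder)
R = 0.5 * np.eye(r)                        # observation-error covariance
H = np.eye(r, n)                           # observe the first r state components

# Prior sample covariance (rank-deficient: rank <= m - 1, far below n).
anom = ensemble - ensemble.mean(axis=1, keepdims=True)
P_sample = anom @ anom.T / (m - 1)

# Compactly supported taper on the state grid (triangular function here).
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
L = 25                                     # taper length in grid points
taper = np.clip(1.0 - dist / L, 0.0, None)

# Schur product damps spurious long-range sample covariances.
P_tapered = P_sample * taper

# Stochastic EnKF update using the tapered covariance.
K = P_tapered @ H.T @ np.linalg.inv(H @ P_tapered @ H.T + R)
perturbed_obs = y[:, None] + rng.multivariate_normal(np.zeros(r), R, size=m).T
analysis = ensemble + K @ (perturbed_obs - H @ ensemble)
```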