Objective measures of climate model performance are proposed and used to assess simulations of the 20th century, which are available from the Coupled Model Intercomparison Project (CMIP3) archive. The primary focus of this analysis is on the climatology of atmospheric fields. For each variable considered, the models are ranked according to a measure of relative error. Based on an average of the relative errors over all fields considered, some models appear to perform substantially better than others. Forming a single index of model performance, however, can be misleading in that it hides a more complex picture of the relative merits of different models. This is demonstrated by examining individual variables and showing that the relative ranking of models varies considerably from one variable to the next. A remarkable exception to this finding is that the so-called "mean model" consistently outperforms all other models in nearly every respect. The usefulness, limitations and robustness of the metrics defined here are evaluated 1) by examining whether the information provided by each metric is correlated in any way with the others, and 2) by determining how sensitive the metrics are to such factors as observational uncertainty, spatial scale, and the domain considered (e.g., tropics versus extra-tropics). An index that gauges the fidelity of model variability on interannual time-scales is found to be only weakly correlated with an index of the mean climate performance. This illustrates the importance of evaluating a broad spectrum of climate processes and phenomena since accurate simulation of one aspect of climate does not guarantee accurate representation of other aspects. Once a broad suite of metrics has been developed to characterize model performance it may become possible to identify optimal subsets for various applications.
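The abstract does not spell out the relative-error measure; the sketch below is a minimal illustration consistent with its description, assuming an area-weighted RMS error for each model that is then normalized by the median error across models. The function names and toy data are illustrative, and the "mean_model" entry shows why averaging fields across models tends to reduce the error.

import numpy as np

def rms_error(field, reference, lat):
    # Area-weighted RMS difference between a model climatology and a reference field.
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(field)
    return np.sqrt(np.sum(weights * (field - reference) ** 2) / np.sum(weights))

def relative_errors(model_fields, reference, lat):
    # Relative error for each model: (E_model - median E) / median E.
    errors = {name: rms_error(f, reference, lat) for name, f in model_fields.items()}
    median = np.median(list(errors.values()))
    return {name: (e - median) / median for name, e in errors.items()}

# Toy data: an "observed" field on a 5 x 10 latitude-longitude grid and three models
# with increasingly large independent errors, plus the average of the three models.
rng = np.random.default_rng(0)
lat = np.linspace(-80.0, 80.0, 5)
obs = rng.normal(size=(5, 10))
models = {f"model_{i}": obs + rng.normal(scale=0.5 + 0.3 * i, size=(5, 10)) for i in range(3)}
models["mean_model"] = np.mean([models[f"model_{i}"] for i in range(3)], axis=0)

for name, rel_err in sorted(relative_errors(models, obs, lat).items(), key=lambda kv: kv[1]):
    print(f"{name:12s} relative error {rel_err:+.2f}")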
Changes in the climate system's energy budget are predominantly revealed in ocean temperatures and the associated thermal expansion contribution to sea-level rise. Climate models, however, do not reproduce the large decadal variability in globally averaged ocean heat content inferred from the sparse observational database, even when volcanic and other variable climate forcings are included. The sum of the observed contributions has also not adequately explained the overall multi-decadal rise. Here we report improved estimates of near-global ocean heat content and thermal expansion for the upper 300 m and 700 m of the ocean for 1950-2003, using statistical techniques that allow for sparse data coverage and applying recent corrections to reduce systematic biases in the most common ocean temperature observations. Our ocean warming and thermal expansion trends for 1961-2003 are about 50 per cent larger than earlier estimates but about 40 per cent smaller for 1993-2003, which is consistent with the recognition that previously estimated rates for the 1990s had a positive bias as a result of instrumental errors. On average, the decadal variability of the climate models with volcanic forcing now agrees approximately with the observations, but the modelled multi-decadal trends are smaller than observed. We add our observational estimate of upper-ocean thermal expansion to other contributions to sea-level rise and find that the sum of contributions from 1961 to 2003 is about 1.5 ± 0.4 mm yr⁻¹, in good agreement with our updated estimate of near-global mean sea-level rise (using techniques established in earlier studies) of 1.6 ± 0.2 mm yr⁻¹.
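As an illustration of the sea-level budget closure described above, the short sketch below sums a set of contribution trends and, assuming the individual errors are independent, combines their uncertainties in quadrature. The contribution values are placeholders chosen only to show the bookkeeping, not the paper's actual terms.

import math

# Illustrative placeholder contributions (trend, 1-sigma uncertainty, both in mm/yr);
# these are NOT the paper's values.
contributions = {
    "upper-ocean thermal expansion": (0.5, 0.2),
    "glaciers and ice caps": (0.5, 0.2),
    "ice sheets": (0.2, 0.2),
    "terrestrial water storage": (0.3, 0.2),
}

total = sum(trend for trend, _ in contributions.values())
# Assuming independent errors, uncertainties add in quadrature.
sigma = math.sqrt(sum(err ** 2 for _, err in contributions.values()))
print(f"sum of contributions: {total:.1f} +/- {sigma:.1f} mm/yr")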
The Atmospheric Model Intercomparison Project (AMIP), initiated in 1989 under the auspices of the World Climate Research Programme, undertook the systematic validation, diagnosis, and intercomparison of the performance of atmospheric general circulation models. For this purpose all models were required to simulate the evolution of the climate during the decade 1979-88, subject to the observed monthly average sea surface temperature and sea ice and a common prescribed atmospheric CO2 concentration and solar constant. By 1995, 31 modeling groups, representing virtually the entire international atmospheric modeling community, had contributed the required standard output of the monthly means of selected statistics. These data have been analyzed by the participating modeling groups, by the Program for Climate Model Diagnosis and Intercomparison, and by the more than two dozen AMIP diagnostic subprojects that have been established to examine specific aspects of the models' performance. Here the analysis and validation of the AMIP results as a whole are summarized in order to document the overall performance of atmospheric general circulation-climate models as of the early 1990s. The infrastructure and plans for continuation of the AMIP project are also reported on.

Although there are apparent model outliers in each simulated variable examined, validation of the AMIP models' ensemble mean shows that the average large-scale seasonal distributions of pressure, temperature, and circulation are reasonably close to what are believed to be the best observational estimates available. The large-scale structure of the ensemble mean precipitation and ocean surface heat flux also resembles the observed estimates but shows particularly large intermodel differences in low latitudes. The total cloudiness, on the other hand, is rather poorly simulated, especially in the Southern Hemisphere. The models' simulation of the seasonal cycle (as represented by the amplitude and phase of the first annual harmonic of sea level pressure) closely resembles the observed variation in almost all regions. The ensemble's simulation of the interannual variability of sea level pressure in the tropical Pacific is reasonably close to that observed (except for its underestimate of the amplitude of major El Niños), while the interannual variability is less well simulated in midlatitudes. When analyzed in terms of the variability of the evolution of their combined space-time patterns in comparison to observations, the AMIP models are seen to exhibit a wide range of accuracy, with no single model performing best in all respects considered.

Analysis of the subset of the original AMIP models for which revised versions have subsequently been used to revisit the experiment shows a substantial reduction of the models' systematic errors in simulating cloudiness but only a slight reduction of the mean seasonal errors of most other variables. In order to understand better the nature of these errors and to accelerate the rate of model improvement, an expanded and continuing project (...
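The seasonal-cycle diagnostic mentioned above, the amplitude and phase of the first annual harmonic of sea level pressure, can be computed from a 12-month climatology as sketched below. The implementation and the synthetic data are illustrative, not AMIP code.

import numpy as np

def first_annual_harmonic(monthly_climatology):
    # Return the amplitude and the month index of the maximum of the annual harmonic.
    x = np.asarray(monthly_climatology, dtype=float)
    n = x.size                                   # normally 12 monthly means
    t = np.arange(n)
    c = np.sum(x * np.cos(2 * np.pi * t / n))
    s = np.sum(x * np.sin(2 * np.pi * t / n))
    amplitude = 2.0 * np.hypot(c, s) / n
    phase = np.arctan2(s, c)                     # radians relative to month index 0
    month_of_max = (phase * n / (2 * np.pi)) % n
    return amplitude, month_of_max

# Synthetic sea level pressure climatology (hPa) peaking in July with a 6 hPa annual range.
months = np.arange(12)
slp = 1013.0 + 3.0 * np.cos(2 * np.pi * (months - 6) / 12)
amp, peak = first_annual_harmonic(slp)
print(f"annual-harmonic amplitude {amp:.2f} hPa, peak near month index {peak:.1f}")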
Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, holds in this regional study, which includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.

Keywords: anthropogenic forcing | detection and attribution | regional modeling

Work for the Intergovernmental Panel on Climate Change (IPCC) fourth assessment report (AR4) has produced global climate model data from groups around the world. These data have been collected in the CMIP3 dataset (1), which is archived at the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory (LLNL). The CMIP3 data are increasingly being downscaled and used to address regional and local issues in water management, agriculture, wildfire mitigation, and ecosystem change. A problem such studies face is how to select the global models to use in the regional studies (2-4). What effect does picking different global models have on the regional climate study results? If different global models give different downscaled results, what strategy should be used for selecting the global models? Are there overall strategies that can be used to guide the choice of models? As more researchers begin using climate models for regional applications, these questions become ever more important.

The present paper and accompanying work investigate these questions. Here we address the regional problem, using as a demonstration case a recent detection and attribution (D&A) study of changes in the hydrological cycle of the western United States (B08 hereafter) (5-8). The insights we ...
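The error-cancellation argument for the multimodel mean can be illustrated with synthetic data: if each model's error is independent of the others, averaging the models reduces the error roughly as 1/sqrt(N). The sketch below is a toy demonstration of that effect, not the study's actual analysis.

import numpy as np

rng = np.random.default_rng(42)
truth = rng.normal(size=500)                 # stand-in for an observed temperature field
n_models = 21
# Each synthetic "model" equals the truth plus its own independent error field.
models = np.array([truth + rng.normal(scale=1.0, size=truth.size) for _ in range(n_models)])

rmse = lambda x: np.sqrt(np.mean((x - truth) ** 2))
individual = np.array([rmse(m) for m in models])
multimodel_mean = models.mean(axis=0)

print(f"mean RMSE of individual models: {individual.mean():.2f}")
print(f"RMSE of the multimodel mean:    {rmse(multimodel_mean):.2f}")  # ~1/sqrt(21) of the above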
Earth system models are complex and represent a large number of processes, resulting in a persistent spread across climate projections for a given future scenario. Owing to different model performances against observations and the lack of independence among models, there is now evidence that giving equal weight to each available model projection is suboptimal. This Perspective discusses newly developed tools that facilitate a more rapid and comprehensive evaluation of model simulations with observations, process-based emergent constraints that are a promising way to focus evaluation on the observations most relevant to climate projections, and advanced methods for model weighting. These approaches are needed to distil the most credible information on regional climate changes, impacts, and risks for stakeholders and policy-makers.
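Model-weighting schemes of the kind referred to above typically combine a performance term (distance of each model from observations) with an independence term that down-weights near-duplicate models. The sketch below is one simplified, illustrative form of such a scheme; the distance metric, shape parameters, and data are assumptions, not a specific published configuration.

import numpy as np

def combined_weights(model_fields, obs_field, sigma_d=0.6, sigma_s=0.6):
    # Performance x independence weights, normalized to sum to one.
    m = np.asarray(model_fields, dtype=float)            # shape (n_models, n_points)
    d = np.sqrt(np.mean((m - obs_field) ** 2, axis=1))   # model-to-observation RMS distance
    s = np.sqrt(np.mean((m[:, None, :] - m[None, :, :]) ** 2, axis=2))  # model-to-model distances
    performance = np.exp(-(d / sigma_d) ** 2)
    similarity = np.exp(-(s / sigma_s) ** 2)
    np.fill_diagonal(similarity, 0.0)                    # only count similarity to *other* models
    independence = 1.0 / (1.0 + similarity.sum(axis=1))
    w = performance * independence
    return w / w.sum()

# Toy data: one model close to observations, one farther away, and a near-duplicate of the first.
rng = np.random.default_rng(1)
obs = rng.normal(size=100)
models = [obs + rng.normal(scale=0.5, size=100),
          obs + rng.normal(scale=1.5, size=100)]
models.append(models[0] + 0.01 * rng.normal(size=100))
print(np.round(combined_weights(models, obs), 3))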