VALUE is an open European collaboration to intercompare downscaling approaches for climate change research, focusing on different validation aspects (marginal, temporal, extremes, spatial, process‐based, etc.). Here we describe the participating methods and first results from the first experiment, which uses “perfect” reanalysis (and reanalysis‐driven regional climate model (RCM)) predictors to assess the intrinsic performance of the methods for downscaling precipitation and temperatures over a set of 86 stations representative of the main climatic regions in Europe. This study constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date, covering the three common downscaling approaches (perfect prognosis, model output statistics—including bias correction—and weather generators) with over 50 downscaling methods representative of the most common techniques. Overall, most of the downscaling methods greatly reduce the raw (reanalysis or RCM) model biases, and no approach or technique appears superior in general, as method‐to‐method variability is large. The factors most influencing the results are the seasonal calibration of the methods (e.g., using a moving window) and their stochastic nature. The particular predictors used also play an important role in the cases where comparison was possible, both for the validation results and for the strength of the predictor–predictand link, which indicates the local variability explained. However, the present study cannot give a conclusive assessment of the skill of the methods for simulating regional future climates; further experiments will soon be performed in the framework of the EURO‐CORDEX initiative (into which VALUE activities have merged and follow on). Finally, research transparency and reproducibility have been a major concern, and substantive steps have been taken to ensure both.
In particular, the data needed to run the experiments are provided at http://www.value-cost.eu/data, and both the data and the validation results are available for further investigation from the VALUE validation portal: http://www.value-cost.eu/validationportal.
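The marginal validation aspects mentioned above can be illustrated with a minimal sketch: two simple indices (mean bias and the ratio of standard deviations) comparing a downscaled series against observations. The series and the choice of indices here are illustrative assumptions, not the actual VALUE index set or data.

```python
# Minimal sketch of two marginal validation indices in the spirit of the
# VALUE framework: mean bias and ratio of standard deviations.
# The two series below are made-up placeholders, not VALUE station data.

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    """Population standard deviation."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def marginal_indices(observed, downscaled):
    """Return (mean bias, ratio of standard deviations)."""
    return (mean(downscaled) - mean(observed),
            std(downscaled) / std(observed))

observed = [2.1, 3.4, 1.8, 4.0, 2.9]    # e.g. daily temperature (degC)
downscaled = [2.4, 3.1, 2.0, 3.8, 3.2]
bias, sd_ratio = marginal_indices(observed, downscaled)
```

A bias near 0 and a standard-deviation ratio near 1 indicate that the downscaled marginal distribution matches the observed one; here the ratio below 1 shows the downscaled series is slightly under-dispersive.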
Abstract. Several sets of reference regions have been used in the literature for the regional synthesis of observed and modelled climate and climate change information. A popular example is the series of reference regions used in the Intergovernmental Panel on Climate Change (IPCC) Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Adaptation (SREX). The SREX regions were slightly modified for the Fifth Assessment Report of the IPCC and used for reporting subcontinental observed and projected changes over a reduced number (33) of climatologically consistent regions encompassing a representative number of grid boxes. These regions are intended to allow analysis of atmospheric data over broad land or ocean regions and have been used as the basis for several popular spatially aggregated datasets, such as the Seasonal Mean Temperature and Precipitation in IPCC Regions for CMIP5 dataset. We present an updated version of the reference regions for the analysis of new observed and simulated datasets (including CMIP6) which offer an opportunity for refinement due to the higher atmospheric model resolution. As a result, the number of land and ocean regions is increased to 46 and 15, respectively, better representing consistent regional climate features. The paper describes the rationale for the definition of the new regions and analyses their homogeneity. The regions are defined as polygons and are provided as coordinates and a shapefile together with companion R and Python notebooks to illustrate their use in practical problems (e.g. calculating regional averages). We also describe the generation of a new dataset with monthly temperature and precipitation, spatially aggregated in the new regions, currently for CMIP5 and CMIP6, to be extended to other datasets in the future (including observations). 
The use of these reference regions, the dataset and the code is illustrated through a worked example that uses scatter plots to offer guidance on the likely range of future climate change at the scale of the reference regions. The regions, datasets and code (R and Python notebooks) are freely available at the ATLAS GitHub repository: https://github.com/SantanderMetGroup/ATLAS (last access: 24 August 2020), https://doi.org/10.5281/zenodo.3998463 (Iturbide et al., 2020).
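The regional-averaging use case mentioned above can be sketched in plain Python: select the grid cells whose centres fall inside a region polygon and average their values. The square "region" and the four grid cells below are made-up placeholders, not the actual ATLAS reference regions or the repository's notebook code (which works with shapefiles).

```python
# Minimal sketch of a polygon-based regional mean. The ATLAS repository
# provides the reference regions as polygon coordinates / a shapefile;
# the square region below is purely illustrative.

def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is the point (lon, lat) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):               # edge crosses the ray's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def regional_mean(grid, polygon):
    """Average values of grid cells whose centres fall inside the polygon.

    grid: list of ((lon, lat), value) pairs."""
    vals = [v for (lon, lat), v in grid if point_in_polygon(lon, lat, polygon)]
    return sum(vals) / len(vals)

region = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # illustrative square
grid = [((2.5, 2.5), 14.0), ((7.5, 2.5), 16.0),
        ((2.5, 7.5), 12.0), ((15.0, 5.0), 99.0)]  # last cell lies outside
mean_t = regional_mean(grid, region)
```

A production version would additionally weight each cell by its area (e.g. by the cosine of latitude), which is omitted here for brevity.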
The performance of statistical downscaling (SD) techniques is critically reassessed with respect to their robust applicability in climate change studies. To this end, in addition to standard accuracy measures and distributional similarity scores, we estimate the robustness of the methods under warming climate conditions by working with anomalously warm historical periods. This validation framework is applied to intercompare the performance of twelve SD methods (from the analog, weather typing and regression families) for downscaling minimum and maximum temperatures in Spain. First, we calibrate these methods in terms of both geographical domains and predictor sets; the results are highly dependent on the latter, with optimum predictor sets including information on near-surface temperature (in particular 2 m temperature), which appropriately discriminates cold episodes related to temperature inversion in the lower troposphere. Although regression methods perform best in terms of correlation, analog and weather typing approaches are more appropriate for reproducing the observed distributions, especially in the case of wintertime minimum temperature. However, the latter two families significantly underestimate the temperature anomalies of the warm periods considered in this work. This underestimation is found to be critical when considering the warming signal in the late 21st century as given by a global climate model (the ECHAM5-MPI model). In this case, the different downscaling methods provide warming values that differ within a range of 1 °C, in agreement with the robustness significance values. Therefore, the proposed test is a promising technique for detecting a lack of robustness in statistical downscaling methods used for climate change projections.
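The robustness test described above rests on a simple idea: hold out anomalously warm years for validation and calibrate on the remaining ones, then check whether the method reproduces the held-out warm anomalies. The sketch below illustrates only the split itself, with made-up annual mean temperatures; it is not the paper's actual selection procedure.

```python
# Sketch of an "anomalously warm period" calibration/validation split:
# the n_warm warmest years are held out for validation.
# The yearly mean temperatures below are invented for illustration.

def warm_split(yearly_means, n_warm):
    """Return (calibration_years, warm_test_years), ranked by annual mean."""
    ranked = sorted(yearly_means, key=yearly_means.get, reverse=True)
    warm = set(ranked[:n_warm])
    calib = [year for year in yearly_means if year not in warm]
    return calib, sorted(warm)

yearly = {1990: 14.2, 1991: 13.8, 1992: 14.9, 1993: 14.1, 1994: 15.3}
calib_years, warm_years = warm_split(yearly, 2)
```

A method whose downscaled anomalies over the held-out warm years are systematically too cold, as found here for the analog and weather typing families, would be flagged as non-robust for projecting a warming climate.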
Abstract. Deep learning techniques (in particular convolutional neural networks, CNNs) have recently emerged as a promising approach for statistical downscaling due to their ability to learn spatial features from huge spatiotemporal datasets. However, existing studies are based on complex models, applied to particular case studies, and use simple validation frameworks, which makes a proper assessment of the (possible) added value offered by these techniques difficult. As a result, these models are usually seen as black boxes, generating distrust among the climate community, particularly in climate change applications. In this paper we undertake a comprehensive assessment of deep learning techniques for continental-scale statistical downscaling, building on the VALUE validation framework. In particular, different CNN models of increasing complexity are applied to downscale temperature and precipitation over Europe, comparing them with a few standard benchmark methods from VALUE (linear and generalized linear models) which have traditionally been used for this purpose. Besides analyzing the adequacy of different components and topologies, we also focus on extrapolation capability, a critical point for the potential application of these techniques in climate change studies. To do this, we use a warm test period as a surrogate for possible future climate conditions. Our results show that, while the added value of CNNs is mostly limited to the reproduction of extremes for temperature, these techniques do outperform the classic ones in the case of precipitation for most aspects considered. This overall good performance, together with the fact that they can be suitably applied to large regions (e.g., continents) without the need to hand-select the spatial features used as predictors, can foster the use of statistical approaches in international initiatives such as the Coordinated Regional Climate Downscaling Experiment (CORDEX).
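The ability of CNNs to learn spatial features, mentioned above, rests on convolution: a small filter scans the predictor field and responds to local spatial patterns. The toy example below applies a single hand-set horizontal-gradient filter to a tiny coarse field; in a real CNN the filter weights are learned from data, and this is in no way code from the paper's models.

```python
# Toy illustration of convolutional feature extraction for downscaling.
# A hand-set 3x3 horizontal-gradient filter (Prewitt-like) is applied to
# a small coarse "temperature" field; a trained CNN would learn many such
# filters automatically instead of having them specified.

def convolve2d(field, kernel):
    """Valid-mode 2D convolution (cross-correlation, as used in CNNs)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(field) - kh + 1):
        row = []
        for j in range(len(field[0]) - kw + 1):
            s = sum(field[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# Coarse field with a uniform left-to-right gradient (e.g. a zonal trend).
field = [[0.0, 1.0, 2.0, 3.0],
         [0.0, 1.0, 2.0, 3.0],
         [0.0, 1.0, 2.0, 3.0]]
# Detector for horizontal gradients.
kernel = [[-1.0, 0.0, 1.0],
          [-1.0, 0.0, 1.0],
          [-1.0, 0.0, 1.0]]
features = convolve2d(field, kernel)
```

The filter responds uniformly and positively across the field, reflecting the constant west-to-east gradient; stacking many learned filters and nonlinearities is what lets a CNN exploit such spatial structure as predictor information without manual feature selection.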