Precipitation downscaling addresses the coarse resolution and poor representation of precipitation in global climate models and helps end users assess the likely hydrological impacts of climate change. This paper integrates perspectives from meteorologists, climatologists, statisticians, and hydrologists to identify the generic needs of end users (in particular, impact modelers) and to discuss downscaling capabilities and gaps. End users need a reliable representation of precipitation intensities and of temporal and spatial variability, as well as physical consistency, independent of region and season. In addition to presenting dynamical downscaling, we review perfect prognosis statistical downscaling, model output statistics, and weather generators, focusing on recent developments to improve the representation of space-time variability. Furthermore, we present evaluation techniques to assess downscaling skill. Downscaling adds considerable value to projections from global climate models. Remaining gaps are uncertainties arising from sparse data; the representation of extreme summer precipitation, subdaily precipitation, and full precipitation fields on fine scales; capturing changes in small-scale processes and their feedback on large scales; and errors inherited from the driving global climate model.
Biases in climate model simulations introduce biases in subsequent impact simulations. Therefore, bias correction methods are operationally used to post-process regional climate projections. However, many problems have been identified, and some researchers question the very basis of the approach. Here we demonstrate that a typical cross-validation is unable to identify improper use of bias correction. Several examples show the limited ability of bias correction to correct and to downscale variability, and demonstrate that bias correction can cause implausible climate change signals. Bias correction cannot overcome major model errors, and naive application might result in ill-informed adaptation decisions. We conclude with a list of recommendations and suggestions for future research to reduce, post-process, and cope with climate model biases.
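As a concrete illustration of why the cross-validation discussed above can mislead, the following minimal Python sketch (synthetic data, not the paper's experiment; the helper "calibrate" is a hypothetical name) fits a simple mean-scaling correction on a calibration period and checks the mean bias on a validation period:

import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily precipitation (mm/day): "observations" and a wet-biased
# "model" covering the same 20-year period. Illustrative sketch only.
obs = rng.gamma(shape=0.8, scale=6.0, size=7300)
model = 1.4 * rng.gamma(shape=0.8, scale=6.0, size=7300)

def calibrate(mod_cal, obs_cal):
    # Fit a simple mean-scaling correction on the calibration period.
    factor = obs_cal.mean() / mod_cal.mean()
    return lambda x: factor * x

# Split-sample cross-validation: fit on the first half, validate on the second.
half = len(obs) // 2
correct = calibrate(model[:half], obs[:half])
corrected = correct(model[half:])

print("raw mean bias      :", model[half:].mean() - obs[half:].mean())
print("corrected mean bias:", corrected.mean() - obs[half:].mean())

The corrected bias is near zero, yet the test says nothing about variability, temporal structure, or the plausibility of change signals; this is exactly the blind spot of marginal cross-validation that the abstract describes.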
Quantile mapping is routinely applied to correct biases of regional climate model simulations relative to observational data. If the observations are of a resolution similar to that of the regional climate model, quantile mapping is a feasible approach. However, if the observations are of much higher resolution, quantile mapping also attempts to bridge this scale mismatch. Here, it is shown for daily precipitation that such quantile-mapping-based downscaling is not feasible but introduces problems similar to those of inflation in perfect prognosis ("PP") downscaling: the spatial and temporal structure of the corrected time series is misrepresented, the drizzle effect for area means is overcorrected, area-mean extremes are overestimated, and trends are affected. To overcome these problems, stochastic bias correction is required.
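For reference, a minimal sketch of the kind of empirical quantile mapping discussed here, using synthetic gamma-distributed daily precipitation and NumPy only. The mapping is deterministic and monotone, which is why it can adjust the marginal distribution but cannot add the unresolved small-scale variability that genuine downscaling requires, hence the abstract's call for a stochastic approach:

import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.7, scale=8.0, size=5000)  # high-resolution "observations"
mod = rng.gamma(shape=1.2, scale=4.0, size=5000)  # coarser "model" simulation

# Transfer function from matching empirical quantiles of model and observations.
probs = np.linspace(0.01, 0.99, 99)
mod_q = np.quantile(mod, probs)
obs_q = np.quantile(obs, probs)

def quantile_map(x):
    # Each model value is replaced by the observed value at the same quantile.
    # np.interp holds the correction constant outside the calibration range.
    return np.interp(x, mod_q, obs_q)

corrected = quantile_map(mod)
# Marginal quantiles now match the observations closely ...
print(np.quantile(obs, 0.95), np.quantile(corrected, 0.95))
# ... but "corrected" is a monotone transform of "mod", so its temporal and
# spatial structure (wet-day sequences, correlations) is inherited from the
# model unchanged; the deterministic map creates no finer-scale variability.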
Climate models are our major source of knowledge about climate change. The impacts of climate change are often quantified by impact models. Whereas impact models typically require high-resolution, unbiased input data, global and regional climate models are in general biased, and their resolution is often lower than desired. Thus, many users of climate model data apply some form of bias correction and downscaling. A fundamental assumption of bias correction is that the considered climate model produces skillful input for a bias correction, including a plausible representation of climate change. Current bias correction methods cannot plausibly correct climate change trends and have limited ability to downscale. Cross-validation of marginal aspects is not sufficient to evaluate bias correction and needs to be complemented by further analyses. Future research should address the development of stochastic models for downscaling and approaches to explicitly incorporate process understanding.
Keywords: Regional climate modelling • Bias correction • Downscaling • Statistical post-processing • Model output statistics
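A minimal sketch (again synthetic, not the papers' experiments) of one issue raised above: a nonlinear quantile-mapping transfer function calibrated on the historical period can alter the raw model's climate change signal.

import numpy as np

rng = np.random.default_rng(2)
obs_hist = rng.gamma(shape=0.7, scale=8.0, size=20000)
mod_hist = rng.gamma(shape=1.4, scale=4.0, size=20000)       # distributional bias
mod_fut = 1.2 * rng.gamma(shape=1.4, scale=4.0, size=20000)  # model projects +20%

# Quantile mapping calibrated on the historical period only.
probs = np.linspace(0.01, 0.99, 99)
mod_q = np.quantile(mod_hist, probs)
obs_q = np.quantile(obs_hist, probs)
qm = lambda x: np.interp(x, mod_q, obs_q)

raw_signal = mod_fut.mean() / mod_hist.mean()
cor_signal = qm(mod_fut).mean() / qm(mod_hist).mean()
print(f"relative change, raw model: {raw_signal:.2f}")
print(f"relative change, after QM : {cor_signal:.2f}")  # typically differs

Part of the discrepancy here comes from holding the correction constant beyond the calibration range (np.interp's behaviour); other extrapolation choices modify the signal differently, which is the sense in which quantile mapping does not simply preserve the raw model trend.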
In this paper, we present a detailed evaluation of cross wavelet analysis of bivariate time series. We develop a statistical test for zero wavelet coherency based on Monte Carlo simulations. If at least one of the two processes considered is Gaussian white noise, an approximate formula for the critical value can be utilized. In the second part, typical pitfalls of wavelet cross spectra and wavelet coherency are discussed. The wavelet cross spectrum appears not to be suitable for significance testing of the interrelation between two processes; instead, one should rather apply wavelet coherency. Furthermore, we investigate problems due to multiple testing. Based on these results, we show that coherency between ENSO and NAO is an artefact for most of the time from 1900 to 1995. However, during a distinct period from around 1920 to 1940, significant coherency between the two phenomena occurs.
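To illustrate the Monte Carlo approach, the sketch below estimates 5%-level critical values of coherency under the null hypothesis of two independent Gaussian white-noise processes. Note the substitution: ordinary Welch magnitude-squared coherence (scipy.signal.coherence) stands in for wavelet coherency, since the paper's wavelet-based test is more involved; the maximum over frequencies addresses the multiple-testing problem mentioned above.

import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
n, nperseg, n_mc = 1024, 128, 500

coh = []
for _ in range(n_mc):
    # Two independent white-noise realizations: the null hypothesis.
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    _, cxy = coherence(x, y, nperseg=nperseg)
    coh.append(cxy)
coh = np.asarray(coh)

# Pointwise critical value: exceeded by chance at 5% of frequencies.
print("pointwise 95% critical value :", np.quantile(coh, 0.95))
# Multiple-testing-aware value: the maximum over all frequencies exceeds
# this level in only 5% of null realizations.
print("field-wise 95% critical value:", np.quantile(coh.max(axis=1), 0.95))

A sample coherency exceeding the field-wise value at some frequency would then remain significant even after accounting for the search across all frequencies.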