Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities, modeled as a Poisson process with normally distributed breakpoint sizes, were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.

Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline, at which the details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
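For illustration only, the sketch below shows how break-type inhomogeneities of the kind described above can be inserted into a homogeneous monthly series, and how two of the reported metrics (the centered root mean square error and the linear trend error) can be computed. The function names, the break frequency, and the break-size standard deviation are illustrative assumptions, not the settings or code of the HOME benchmark itself.

```python
import numpy as np

def insert_breaks(series, years, breaks_per_century=5.0, break_sd=0.8, rng=None):
    """Add random break-type inhomogeneities to a monthly series.

    The number of breaks follows a Poisson distribution, break positions are
    uniform in time and break sizes are normally distributed (illustrative
    parameter values, not the HOME settings).
    """
    rng = np.random.default_rng() if rng is None else rng
    n_months = series.size
    n_breaks = rng.poisson(breaks_per_century * years / 100.0)
    positions = np.sort(rng.integers(1, n_months, size=n_breaks))
    sizes = rng.normal(0.0, break_sd, size=n_breaks)
    inhomogeneous = series.copy()
    for pos, size in zip(positions, sizes):
        inhomogeneous[pos:] += size  # step change persists to the end of the series
    return inhomogeneous

def centered_rmse(homogenized, truth):
    """Centered RMSE: RMSE after removing the mean of each series."""
    d = (homogenized - homogenized.mean()) - (truth - truth.mean())
    return np.sqrt(np.mean(d ** 2))

def trend_error(homogenized, truth):
    """Difference in linear trend (per time step) between the two series."""
    t = np.arange(truth.size)
    return np.polyfit(t, homogenized, 1)[0] - np.polyfit(t, truth, 1)[0]

# Example: a 100-yr homogeneous temperature-like series with a seasonal cycle
rng = np.random.default_rng(0)
months = np.arange(100 * 12)
truth = 10 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, months.size)
corrupted = insert_breaks(truth, years=100, rng=rng)
print(centered_rmse(corrupted, truth), trend_error(corrupted, truth) * 120)  # trend error per decade
```

In a validation of this type, the metrics would be evaluated between each homogenized contribution and the known truth, both station by station and for the network average series.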
Instrumental meteorological measurements from periods prior to the start of national weather services are designated “early instrumental data.” They have played an important role in climate research as they allow daily to decadal variability and changes of temperature, pressure, and precipitation, including extremes, to be addressed. Early instrumental data can also help place twenty-first century climatic changes into a historical context such as defining preindustrial climate and its variability. Until recently, the focus was on long, high-quality series, while the large number of shorter series (which together also cover long periods) received little to no attention. The shift in climate and climate impact research from mean climate characteristics toward weather variability and extremes, as well as the success of historical reanalyses that make use of short series, generates a need for locating and exploring further early instrumental measurements. However, information on early instrumental series has never been electronically compiled on a global scale. Here we attempt a worldwide compilation of metadata on early instrumental meteorological records prior to 1850 (1890 for Africa and the Arctic). Our global inventory comprises information on several thousand records, about half of which have not yet been digitized (not even as monthly means), and only approximately 20% of which have made it to global repositories. The inventory will help to prioritize data rescue efforts and can be used to analyze the potential feasibility of historical weather data products. The inventory will be maintained as a living document and is a first, critical, step toward the systematic rescue and reevaluation of these highly valuable early records. Additions to the inventory are welcome.
Various types of studies require sufficiently long data series processed identically over the entire area of interest. For climate analysis, it is necessary that the analysed time series are homogeneous, meaning that their variations are caused only by variations in weather and climate. Unfortunately, most climatological series are inhomogeneous and contain outliers that may significantly affect the analysis results. The 137 stations with precipitation measurements belonging to the meteorological station network governed by the Meteorological and Hydrological Service of Croatia were selected for the present analysis. Most of the data series cover the period from the late 1940s or early 1950s through the year 2010. For quality control and homogenization, an approach based on the software ProClimDB/Anclim was applied. In this study, we describe the results of the quality control and homogenization of monthly precipitation sums, as well as the spatial relationship of precipitation in the Croatian region. The precipitation network in Croatia is fairly homogeneous, as only 23% of the 137 analysed stations are found to be inhomogeneous.
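As a minimal illustration only (this is not the ProClimDB/Anclim workflow; the reference handling, test statistic and critical value are simplified assumptions), a ratio-based relative homogeneity check for annual precipitation sums could look like the following sketch, which applies the standard normal homogeneity test (SNHT) statistic to the ratio of a candidate series to a reference series. Ratios are used because precipitation is treated multiplicatively.

```python
import numpy as np

def snht_statistic(x):
    """Maximum SNHT statistic T(k) = k*z1^2 + (n-k)*z2^2 of a standardized series."""
    z = (x - x.mean()) / x.std(ddof=1)
    n = z.size
    t = np.empty(n - 1)
    for k in range(1, n):
        z1 = z[:k].mean()
        z2 = z[k:].mean()
        t[k - 1] = k * z1 ** 2 + (n - k) * z2 ** 2
    return t.max(), int(t.argmax()) + 1  # statistic and position of the suspected break

def relative_check(candidate, reference, critical_value=8.0):
    """Test annual precipitation ratios candidate/reference for a single break.

    The critical value is an illustrative placeholder; in practice it depends
    on series length and the chosen significance level.
    """
    ratio = candidate / reference
    t_max, k = snht_statistic(ratio)
    return {"statistic": t_max, "break_index": k, "inhomogeneous": t_max > critical_value}

# Example with synthetic annual sums (mm): a 20 % step change after year 30
rng = np.random.default_rng(1)
reference = rng.gamma(shape=50, scale=20, size=60)        # roughly 1000 mm per year
candidate = reference * rng.normal(1.0, 0.05, size=60)
candidate[30:] *= 1.2
print(relative_check(candidate, reference))
```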