Abstract. The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users of homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
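The two quantitative metrics named in this abstract can be sketched concisely. The following is a minimal Python illustration assuming NumPy arrays of equal length; the function names and the least-squares trend fit are ours for illustration, not HOME's actual scoring code.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: subtract each series' mean first, so the score
    measures errors in variability rather than a constant offset."""
    h = homogenized - homogenized.mean()
    t = truth - truth.mean()
    return np.sqrt(np.mean((h - t) ** 2))

def trend_error(homogenized, truth, time):
    """Difference of least-squares linear trend estimates (slope per
    unit time) between the homogenized and the true series."""
    return np.polyfit(time, homogenized, 1)[0] - np.polyfit(time, truth, 1)[0]

# Toy usage: 100 years of monthly anomalies with one residual break
rng = np.random.default_rng(0)
time = np.arange(1200) / 12.0
truth = 0.01 * time + rng.normal(0.0, 0.5, 1200)
homog = truth + 0.2 * (time > 50.0)  # leftover inhomogeneity of 0.2
print(centered_rmse(homog, truth), trend_error(homog, truth, time))
```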
This paper presents the Integrated Nowcasting through Comprehensive Analysis (INCA) system, which has been developed for use in mountainous terrain. Analysis and nowcasting fields include temperature, humidity, wind, precipitation amount, precipitation type, cloudiness, and global radiation. The analysis part of the system combines surface station data with remote sensing data in such a way that the observations at the station locations are reproduced, whereas the remote sensing data provide the spatial structure for the interpolation. The nowcasting part employs classical correlation-based motion vectors derived from previous consecutive analyses. In the case of precipitation, the nowcast includes an intensity-dependent elevation effect. After 2-6 h of forecast time the nowcast is merged into an NWP forecast provided by a limited-area model, using a predefined temporal weighting function. Cross validation of the analysis and verification of the nowcast are performed. Analysis quality is high for temperature but comparatively low for wind and precipitation, because of the limited representativeness of station data in mountainous terrain, which can be only partially compensated for by the analysis algorithm. Significant added value of the system compared to the NWP forecast is found in the first few hours of the nowcast. At longer lead times the effect of the latest observations becomes small, but in the case of temperature the downscaling of the NWP forecast within the INCA system continues to provide some improvement compared to the direct NWP output.
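The abstract states only that the nowcast is faded into the NWP forecast over a 2-6 h window by a predefined temporal weighting function, not the function's exact form. Below is a minimal Python sketch of such a blend, assuming a linear ramp; INCA's actual weighting function may differ.

```python
import numpy as np

def merge_nowcast_nwp(nowcast, nwp, lead_time_h, t_full=2.0, t_zero=6.0):
    """Blend a nowcast field into an NWP field of the same shape.
    The nowcast weight is 1 up to t_full hours, falls to 0 at t_zero
    hours, and the NWP forecast takes over beyond that. The linear
    ramp is an assumption for illustration; INCA's predefined
    weighting function is not given in the abstract."""
    w = np.clip((t_zero - lead_time_h) / (t_zero - t_full), 0.0, 1.0)
    return w * np.asarray(nowcast) + (1.0 - w) * np.asarray(nwp)

# Usage: at a 3 h lead time this mixes 75 % nowcast with 25 % NWP
blended = merge_nowcast_nwp(np.full((4, 4), 2.0), np.full((4, 4), 6.0), 3.0)
```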
We use cosmography to present constraints on the kinematics of the Universe, without postulating any underlying theoretical model. To this end, we use a Markov chain Monte Carlo analysis to perform comparisons to the supernova Ia Union 2 compilation, combined with the Hubble Space Telescope measurements of the Hubble constant, and the Hubble parameter datasets. We introduce a sixth-order cosmographic parameter and show that it does not considerably enlarge the posterior distribution when comparing to the fifth-order results. We also propose a way to construct viable parameter variables to be used as alternatives to the redshift z. These can overcome both the problems of divergence and lack of accuracy associated with the use of z. Moreover, we show that it is possible to improve the numerical fits by re-parameterizing the cosmological distances. In addition, we constrain the equation of state of the Universe as a whole by the use of cosmography. Thus, we derive expressions which can be directly used to fit the equation of state and the pressure derivatives up to fourth order. To this end, it is necessary to depart from a pure cosmographic analysis and to assume the Friedmann equations as valid. All our results are consistent with the ΛCDM model, although alternative fluid models, with nearly constant pressure and no cosmological constant, match the results accurately as well.
PACS numbers: 98.80.Jk, 98.80.Es
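For context, the cosmographic fit rests on a Taylor expansion of the luminosity distance whose coefficients are the kinematic parameters (Hubble constant, deceleration, jerk, snap). The sketch below writes out the standard fourth-order, spatially flat expansion and the bounded variable y = z/(1+z), one common way to construct an alternative to z; it is illustrative code, not the authors' pipeline.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def dl_taylor(z, H0, q0, j0, s0):
    """Fourth-order cosmographic luminosity distance [Mpc] about z = 0
    for a spatially flat universe (curvature terms dropped).
    H0 [km/s/Mpc]; q0, j0, s0: deceleration, jerk, snap parameters."""
    return (C_KMS / H0) * (
        z
        + 0.5 * (1 - q0) * z**2
        - (1 - q0 - 3 * q0**2 + j0) * z**3 / 6.0
        + (2 - 2 * q0 - 15 * q0**2 - 15 * q0**3
           + 5 * j0 + 10 * q0 * j0 + s0) * z**4 / 24.0
    )

def y_redshift(z):
    """Auxiliary variable y = z/(1+z): maps z in [0, inf) to y in
    [0, 1), mitigating the divergence of the plain z-expansion."""
    return z / (1.0 + z)
```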
Cosmography is used in cosmological data processing in order to constrain the kinematics of the universe in a model-independent way, providing an objective means to evaluate the agreement of a model with observations. In this paper, we extend the conventional methodology of cosmography, which employs Taylor expansions of observables, by an alternative approach using Padé approximations. Due to the superior convergence properties of Padé expansions, it is possible to improve the fitting analysis and obtain numerical values for the parameters of the cosmographic series. From the results, we can derive the equation of state parameter of the universe and its first derivative and thus acquire information about the thermodynamic state of the universe. We carry out statistical analyses using observations of the distance modulus of type Ia supernovae, provided by the Union 2.1 compilation of the Supernova Cosmology Project, employing a Markov chain Monte Carlo approach with an implemented Metropolis algorithm. We compare the results of the original Taylor approach to the newly introduced Padé formalism. The analyses show that experimental data constrain the observable universe well, finding an accelerating universe and a positive jerk parameter. We demonstrate that the Padé convergence radii are greater than standard Taylor convergence radii, and infer a lower limit on the acceleration of the universe solely by requiring the positivity of the Padé expansion. We obtain fairly good agreement with the Planck results, confirming the ΛCDM model at small redshifts, although we cannot exclude a dark energy density varying in time with negligible speed of sound.
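To make the Taylor-versus-Padé comparison concrete, a Padé approximant can be generated directly from the Taylor coefficients of the cosmographic series, for instance with scipy.interpolate.pade. The parameter values below are illustrative placeholders, not the paper's fitted results.

```python
import numpy as np
from scipy.interpolate import pade

# Illustrative cosmographic parameters (placeholders, not fits)
q0, j0, s0 = -0.55, 1.0, -0.35

# Taylor coefficients (lowest order first) of H0 * d_L(z) / c about z = 0
taylor = [
    0.0,
    1.0,
    0.5 * (1 - q0),
    -(1 - q0 - 3 * q0**2 + j0) / 6.0,
    (2 - 2 * q0 - 15 * q0**2 - 15 * q0**3
     + 5 * j0 + 10 * q0 * j0 + s0) / 24.0,
]

# (2,2) Pade approximant p(z)/q(z) built from the same coefficients
p, q = pade(taylor, 2)

z = 1.5  # a redshift where the truncated Taylor series degrades
print("Taylor:", np.polyval(taylor[::-1], z))
print("Pade  :", p(z) / q(z))
```

Because the rational form p(z)/q(z) can track the underlying function beyond the Taylor convergence radius, the Padé fit remains usable at redshifts where the truncated polynomial breaks down, which is the practical advantage the abstract reports.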