The ability of climate models to simulate large-scale temperature changes during the twentieth century when they include both anthropogenic and natural forcings, and their inability to account for warming over the last 50 yr when they exclude increasing greenhouse gas concentrations, have been used as evidence for an anthropogenic influence on global warming. One criticism of the models used in many of these studies is that they exclude some forcings of potential importance, notably from fossil fuel black carbon, biomass smoke, and land use changes. Herein, transient simulations with a new model, the Hadley Centre Global Environmental Model version 1 (HadGEM1), are described, which include these forcings in addition to other anthropogenic and natural forcings, and a fully interactive treatment of atmospheric sulfur and its effects on clouds. These new simulations support previous work by showing that there was a significant anthropogenic influence on near-surface temperature change over the last century. They demonstrate that black carbon and land use changes are relatively unimportant for explaining global mean near-surface temperature changes.

The pattern of warming in the troposphere and cooling in the stratosphere that has been observed in radiosonde data since 1958 can only be reproduced when the model includes anthropogenic forcings. However, there are some discrepancies between the model simulations and radiosonde data, which are largest in the Tropics and at high latitudes, where observational uncertainty is greatest.

Predictions of future warming have also been made using the new model. Twenty-first-century warming rates, following policy-relevant emissions scenarios, are slightly greater in HadGEM1 than in the Third Hadley Centre Coupled Ocean-Atmosphere General Circulation Model (HadCM3) as a result of the extra forcing in HadGEM1. An experiment in which greenhouse gases and other anthropogenic forcings are stabilized at 2100 levels and held constant until 2200 predicts a committed twenty-second-century warming of less than 1 K, whose spatial distribution resembles that of warming during the twenty-first century, implying that the local feedbacks that determine the pattern of warming do not change significantly.
The high computational cost of calculating radiative heating rates in numerical weather prediction (NWP) and climate models requires that these calculations be made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the resulting cloud-radiation feedback. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the 'split time-stepping' scheme takes advantage of the independent nature of the monochromatic calculations of the 'correlated-k' method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (the optically thick terms). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the 'incremental time-stepping' scheme performs a simple radiative transfer calculation using only one or two monochromatic terms representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement over the current scheme employed at the Met Office is achieved for a small computational cost. The 'incremental time-stepping' scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
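To make the 'incremental time-stepping' idea concrete, the sketch below outlines the bookkeeping it describes: a full spectral calculation is performed only every few model steps, while in between a cheap calculation restricted to one or two optically thin terms provides an increment that is added to the last full result. All function names and the placeholder heating-rate formulas are hypothetical stand-ins for illustration, not the Met Office radiation code.

```python
import numpy as np

# Hypothetical stand-ins: full_radiation() represents the expensive all-band
# correlated-k calculation; thin_band_radiation() a cheap calculation using only
# one or two optically thin monochromatic terms, which respond most strongly to
# the grey extinction of cloud droplets and ice crystals.

def full_radiation(state, cloud):
    # placeholder full-spectrum heating rate (K/day) for a given cloud amount
    return -1.5 + 0.5 * cloud

def thin_band_radiation(state, cloud):
    # placeholder thin-band heating rate, sensitive to the cloud field
    return 0.5 * cloud

def incremental_heating(states, clouds, full_every=6):
    """Full calculation every `full_every` steps; in between, add the change in
    the thin-band heating since the last full call to the stored full result."""
    heating_full = thin_at_full = None
    out = []
    for step, (state, cloud) in enumerate(zip(states, clouds)):
        if step % full_every == 0:
            heating_full = full_radiation(state, cloud)
            thin_at_full = thin_band_radiation(state, cloud)
            heating = heating_full
        else:
            # heating-rate increment from the cloud change, added to the last full call
            thin_now = thin_band_radiation(state, cloud)
            heating = heating_full + (thin_now - thin_at_full)
        out.append(heating)
    return out

# usage: 12 time steps with a cloud field that changes every step
clouds = np.linspace(0.0, 1.0, 12)
states = np.zeros(12)
print(incremental_heating(states, clouds))
```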
This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.

The predictive quality of an ensemble model of cirrus ice crystals for modelling passive and active measurements of ice cloud, from the ultraviolet (UV) to the microwave, is tested. The ensemble model predicts m ∝ D², where D is the maximum dimension of the ice crystal and m is its mass. This predicted m-D relationship is applied to a moment-estimation parametrization of the particle size distribution (PSD) to estimate the PSD shape, given the ice water content (IWC) and in-cloud temperature. The same microphysics is applied across the electromagnetic spectrum to model UV, infrared, microwave, and radar observations. The short-wave measurements consist of airborne UV backscatter lidar (light detection and ranging) estimates of the volume extinction coefficient, the total solar optical depth, and space-based multi-directional spherical albedo retrievals at 0.865 µm, between scattering angles of 85° and 125°. The airborne long-wave measurements consist of high-resolution interferometer upwelling brightness temperatures obtained between wavelengths of about 3.45 and 4.1 µm, and between 8.0 and 12.0 µm. The low-frequency measurements consist of ground-based Chilbolton 35 GHz radar reflectivity measurements and space-based upwelling 190 GHz brightness temperature measurements. The predictive quality of the ensemble model is demonstrated to be generally within the experimental uncertainty of the lidar backscatter estimates of the volume extinction coefficient and total solar optical depth. The ensemble model prediction of the high-resolution brightness temperature measurements is generally within ±2 K and ±1 K at solar and infrared wavelengths, respectively. The 35 GHz radar reflectivity and 190 GHz brightness temperatures are generally simulated to within ±2 dBZe and ±2 K, respectively. The directional spherical albedo observations suggest that the scattering phase function of the most randomized ensemble model gives the best fit to the measurements (generally within ±3%). This article demonstrates that the ensemble model, assuming the same microphysics, is physically consistent across the electromagnetic spectrum.
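The link between the predicted mass-dimension relation and the moment-estimation PSD parametrization can be made explicit with one worked step, sketched below. The prefactor a and the moment notation M_2 are introduced here only for illustration and are not taken from the abstract.

```latex
\begin{align*}
  % Mass-dimension relation predicted by the ensemble model (prefactor a assumed here)
  m(D) &= a\,D^{2} \\
  % The ice water content is then proportional to the second moment of the PSD N(D),
  % so IWC (together with in-cloud temperature) constrains the moments from which
  % the PSD shape is estimated:
  \mathrm{IWC} &= \int_{0}^{\infty} m(D)\,N(D)\,\mathrm{d}D
                = a \int_{0}^{\infty} D^{2}\,N(D)\,\mathrm{d}D
                = a\,M_{2}
\end{align*}
```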
Galactic magnetic fields are typically modelled by mean-field dynamos involving the α-effect. Here we consider, very briefly, some of the issues involving the nonlinear dependence of α on the mean field.
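For readers unfamiliar with the setting, a minimal sketch in standard mean-field notation is given below; the quenching expression is a commonly used illustrative form of the nonlinear dependence of α on the mean field, not necessarily the one analysed in this article.

```latex
\begin{align*}
  % Mean-field induction equation: alpha-effect plus turbulent diffusion eta_t
  \frac{\partial \overline{\mathbf{B}}}{\partial t}
    &= \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
       + \alpha\,\overline{\mathbf{B}}
       - \eta_{t}\,\nabla \times \overline{\mathbf{B}} \right) \\
  % A widely used illustrative quenching of alpha at the equipartition field B_eq:
  \alpha(\overline{B})
    &= \frac{\alpha_{0}}{1 + \overline{B}^{2}/B_{\mathrm{eq}}^{2}}
\end{align*}
```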