A large number of processes are involved in the chain from emissions of aerosol precursor gases and primary particles to impacts on cloud radiative forcing. Those processes are manifest in a number of relationships that can be expressed as factors d ln X/d ln Y driving aerosol effects on cloud radiative forcing. These factors include the relationships between cloud condensation nuclei (CCN) concentration and emissions, droplet number and CCN concentration, cloud fraction and droplet number, cloud optical depth and droplet number, and cloud radiative forcing and cloud optical depth. The relationship between cloud optical depth and droplet number can be further decomposed into the sum of two terms involving the relationship of droplet effective radius and cloud liquid water path with droplet number. These relationships can be constrained using observations of recent spatial and temporal variability of these quantities. However, we are most interested in the radiative forcing since the preindustrial era. Because few relevant measurements are available from that era, relationships from recent variability have been assumed to be applicable to the preindustrial to present-day change. Our analysis of Aerosol Comparisons between Observations and Models (AeroCom) model simulations suggests that estimates of relationships from recent variability are poor constraints on relationships from anthropogenic change for some terms, with even the sign of some relationships differing in many regions. Proxies connecting recent spatial/temporal variability to anthropogenic change, or sustained measurements in regions where emissions have changed, are needed to constrain estimates of anthropogenic aerosol impacts on cloud radiative forcing.
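The factor chain described above can be written out explicitly as a product of logarithmic derivatives (a sketch consistent with the text, not a formula quoted from the paper; here F is cloud radiative forcing, E emissions, N_d droplet number concentration, τ cloud optical depth, r_e droplet effective radius, and L liquid water path, with the cloud-fraction pathway omitted for brevity):

```latex
\frac{d\ln F}{d\ln E}
= \frac{d\ln \mathrm{CCN}}{d\ln E}\,
  \frac{d\ln N_d}{d\ln \mathrm{CCN}}\,
  \frac{d\ln \tau}{d\ln N_d}\,
  \frac{d\ln F}{d\ln \tau},
\qquad
\frac{d\ln \tau}{d\ln N_d}
= \frac{d\ln L}{d\ln N_d} - \frac{d\ln r_e}{d\ln N_d}
```

Since τ ∝ L/r_e, taking logarithms gives the second identity directly: the decomposition into two terms mentioned above, with the effective-radius term entering with a negative sign.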
Abstract. The activation of aerosols to form cloud droplets depends on vertical velocities whose local variability is not typically resolved at the GCM grid scale. Consequently, it is necessary to represent the subgrid-scale variability of vertical velocity in the calculation of cloud droplet number concentration. This study uses the UK Chemistry and Aerosols community model (UKCA) within the Hadley Centre Global Environmental Model (HadGEM3), coupled for the first time to an explicit aerosol activation parameterisation, and hence known as UKCA-Activate. We explore the range of uncertainty in estimates of the indirect aerosol effects attributable to the choice of parameterisation of the subgrid-scale variability of vertical velocity in HadGEM-UKCA. Results of simulations demonstrate that the use of a single characteristic vertical velocity cannot replicate results derived with a distribution of vertical velocities, and is to be discouraged in GCMs. This study focuses on the effect of the variance (σ_w²) of a Gaussian pdf (probability density function) of vertical velocity. Fixed values of σ_w (spanning the range measured in situ by nine flight campaigns found in the literature) and a configuration in which σ_w depends on turbulent kinetic energy (TKE) are tested. Results from the mid-range fixed-σ_w and TKE-based configurations both compare well with observed vertical velocity distributions and cloud droplet number concentrations. The radiative flux perturbation due to the total effects of anthropogenic aerosol is estimated at −1.9 W m⁻² with σ_w = 0.1 m s⁻¹, −2.1 W m⁻² with σ_w derived from TKE, −2.25 W m⁻² with σ_w = 0.4 m s⁻¹, and −2.3 W m⁻² with σ_w = 0.7 m s⁻¹. The breadth of this range, 0.4 W m⁻², amounts to a substantial fraction of the total diversity of current aerosol forcing estimates.
Reducing the uncertainty in the parameterisation of σ_w would therefore be an important step towards reducing the uncertainty in estimates of the indirect aerosol effects. Detailed examination of regional radiative flux perturbations reveals that aerosol microphysics can be responsible for some climate-relevant radiative effects, highlighting the importance of including microphysical aerosol processes in GCMs.
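Why a single characteristic velocity cannot reproduce a pdf-averaged result can be illustrated with a toy calculation (a sketch only: the activation curve below is a made-up monotonic function, not the UKCA-Activate scheme, and all numbers are illustrative). Because activation is a nonlinear function of updraught speed, the activated fraction averaged over a Gaussian pdf of vertical velocity differs from the value at the mean velocity, and the size of the discrepancy depends on σ_w:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_activated_fraction(w, c=0.5):
    """Toy, purely illustrative activation curve (NOT the UKCA-Activate
    parameterisation): activated fraction rises monotonically with
    updraught speed w (m/s) and saturates towards 1."""
    w = np.maximum(w, 0.0)  # downdraughts activate no droplets
    return w / (w + c)

w_mean = 0.2  # illustrative grid-box mean vertical velocity (m/s)
f_char = toy_activated_fraction(w_mean)  # single characteristic velocity

pdf_mean = {}
for sigma_w in (0.1, 0.4, 0.7):  # span of the fixed values tested above
    w = rng.normal(w_mean, sigma_w, size=200_000)  # Gaussian pdf of w
    pdf_mean[sigma_w] = toy_activated_fraction(w).mean()
    print(f"sigma_w={sigma_w}: pdf-averaged={pdf_mean[sigma_w]:.3f} "
          f"vs characteristic={f_char:.3f}")
```

Even in this caricature, the pdf-averaged activated fraction moves away from the characteristic-velocity value as σ_w grows, which is the qualitative behaviour motivating a pdf-based treatment.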
In this paper we address the problem of constructing reliable neural-net implementations, given the assumption that any particular implementation will not be totally correct. The approach taken in this paper is to organize the inevitable errors so as to minimize their impact in the context of a multiversion system, i.e., the system functionality is reproduced in multiple versions, which together constitute the neural-net system. The unique characteristics of neural computing are exploited in order to engineer reliable systems in the form of diverse, multiversion systems that are used together with a "decision strategy" (such as majority vote). Theoretical notions of "methodological diversity" contributing to the improvement of system performance are implemented and tested. An important aspect of the engineering of an optimal system is to overproduce the components and then choose an optimal subset. Three general techniques for choosing final system components are implemented and evaluated. Several different approaches to the effective engineering of complex multiversion system designs are realized and evaluated, comparing the reliability of the overall system with the lower reliability of its component substructures.
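The voting and overproduce-then-select ideas can be sketched in miniature (a hypothetical simulation, not the paper's experiments: each "version" is modelled as an independent classifier with a known accuracy, and a greedy forward search stands in for one of the selection techniques):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an overproduced pool of versions: each errs independently
# at a rate set by its (randomly drawn, illustrative) accuracy.
n_items, n_versions = 1000, 9
truth = rng.integers(0, 2, size=n_items)
accuracies = rng.uniform(0.6, 0.9, size=n_versions)
flip = rng.random((n_items, n_versions)) >= accuracies  # where each version errs
predictions = np.where(flip, 1 - truth[:, None], truth[:, None])

def majority_vote(preds):
    """Decision strategy: majority vote across the selected versions."""
    return (preds.mean(axis=1) > 0.5).astype(int)

def accuracy(preds):
    return float((majority_vote(preds) == truth).mean())

# Overproduce, then choose a subset: greedy forward selection that keeps
# adding the version which most improves the voted accuracy, stopping
# when no addition helps.
chosen, remaining = [], list(range(n_versions))
while remaining:
    best = max(remaining, key=lambda j: accuracy(predictions[:, chosen + [j]]))
    if chosen and accuracy(predictions[:, chosen + [best]]) <= accuracy(predictions[:, chosen]):
        break
    chosen.append(best)
    remaining.remove(best)

print("chosen subset:", chosen, "voted accuracy:", accuracy(predictions[:, chosen]))
```

Because the versions' errors are independent in this toy setup, the voted subset is at least as accurate as the best single version, illustrating why diversity among versions is the property the selection techniques try to exploit.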