Observationally informed development of a new framework for bulk rain microphysics, the Bayesian Observationally Constrained Statistical–Physical Scheme (BOSS; described in Part I of this study), is demonstrated. This scheme’s development is motivated by large uncertainties in cloud and weather simulations associated with approximations and assumptions in existing microphysics schemes. Here, a proof-of-concept study is presented using a Markov chain Monte Carlo sampling algorithm with BOSS to probabilistically estimate microphysical process rates and parameters directly from a set of synthetically generated rain observations. The framework utilized is an idealized steady-state one-dimensional column rainshaft model with specified column-top rain properties and a fixed thermodynamic profile. Different configurations of BOSS—flexibility being a key feature of this approach—are constrained via synthetic observations generated from a traditional three-moment bulk microphysics scheme. The ability to retrieve correct parameter values when the true parameter values are known is illustrated. For cases when there is no set of true parameter values, the accuracy of configurations of BOSS that have different levels of complexity is compared. It is found that the addition of the sixth moment as a prognostic variable improves prediction of the third moment (proportional to bulk rain mass) and rain rate. In contrast, increasing process rate formulation complexity by adding more power terms has little benefit—a result that is explained using further-idealized experiments. BOSS rainshaft simulations are shown to estimate the true process rates well when constrained by bulk rain observations, with the additional benefit of rigorously quantified uncertainty of these estimates.
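The core computational idea of the abstract above—probabilistically estimating the parameters of power-law process-rate formulations from observations with a Markov chain Monte Carlo sampler—can be sketched in a few lines. This is a minimal illustration, not BOSS's actual formulation: the rate law `a * M3**b`, the parameter values, and the noise level are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a power-law process rate a * M3**b plus noise.
# (Hypothetical rate law and values, chosen only for illustration.)
a_true, b_true, sigma = 2.0, 0.5, 0.05
M3 = np.linspace(0.1, 1.0, 50)                  # stand-in bulk third moments
obs = a_true * M3**b_true + rng.normal(0.0, sigma, M3.size)

def log_post(theta):
    """Log posterior: Gaussian likelihood, flat prior with a > 0."""
    a, b = theta
    if a <= 0:
        return -np.inf
    resid = obs - a * M3**b
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampler.
theta = np.array([1.0, 1.0])                    # deliberately wrong start
lp = log_post(theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.02, 2)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5_000:])                # discard burn-in
print("posterior mean:", post.mean(axis=0))     # should be near (2.0, 0.5)
print("posterior std: ", post.std(axis=0))      # quantified uncertainty
```

The posterior spread around the mean is the "rigorously quantified uncertainty" the abstract refers to: the sampler returns a distribution over parameters, not a single best fit.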
Abstract. Many applications in science require that computational models and data be combined. In a Bayesian framework, this is usually done by defining likelihoods based on the mismatch of model outputs and data. However, matching model outputs and data in this way can be unnecessary or impossible. For example, using large amounts of steady state data is unnecessary because these data are redundant. It is numerically difficult to assimilate data in chaotic systems. It is often impossible to assimilate data of a complex system into a low-dimensional model. As a specific example, consider a low-dimensional stochastic model for the dipole of the Earth's magnetic field in which the other field components are ignored. The above issues can be addressed by selecting features of the data, and defining likelihoods based on the features, rather than by the usual mismatch of model output and data. Our goal is to contribute to a fundamental understanding of such a feature-based approach that allows us to assimilate selected aspects of data into models. We also explain how the feature-based approach can be interpreted as a method for reducing an effective dimension and derive new noise models, based on perturbed observations, that lead to computationally efficient solutions. Numerical implementations of our ideas are illustrated in four examples.
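The feature-based likelihood described above can be sketched concretely. This is our reading of the general idea, not the paper's exact construction: a long, redundant steady-state series is reduced to a two-dimensional feature vector (mean and standard deviation), and the likelihood compares features rather than the raw 100,000-point series. The feature-noise covariance `gamma` is an assumed quantity here.

```python
import numpy as np

rng = np.random.default_rng(1)

# A long steady-state record: most of these points are redundant.
data = rng.normal(loc=3.0, scale=0.5, size=100_000)

def features(x):
    """Reduce a series to a low-dimensional feature vector."""
    return np.array([x.mean(), x.std()])

f_data = features(data)
gamma = np.diag([0.01**2, 0.01**2])   # assumed feature-noise covariance

def log_like(theta):
    """Gaussian likelihood defined on the feature mismatch only.

    The 'model' is a stochastic simulator with parameters
    theta = (location, scale); its output is never compared to the
    data point by point, only through the features.
    """
    model_out = rng.normal(loc=theta[0], scale=theta[1], size=10_000)
    d = features(model_out) - f_data
    return -0.5 * d @ np.linalg.solve(gamma, d)

# The effective dimension of the mismatch drops from 100,000 to 2,
# and the correct parameters score far better than wrong ones.
print(log_like(np.array([3.0, 0.5])), log_like(np.array([1.0, 0.5])))
```

Because the model output is regenerated stochastically inside the likelihood, this is a noisy (pseudo-marginal style) likelihood; the huge gap between correct and incorrect parameters makes that noise irrelevant for the illustration.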
Partially reacting candidate fuels under highly dilute conditions across a range of temperatures provides a means to classify the candidates based on traditional ignition characteristics using much lower quantities (sub-mL) than the full octane tests. Using a classifier based on a Gaussian Process model, synthetic species profiles obtained by plug flow reactor simulations at seven temperatures are used to demonstrate that the configuration can be used to classify 95% of the samples correctly for autoignition sensitivity exceeding a threshold (S ≥ 8) and 100% of the samples correctly for research octane number exceeding a threshold (RON ≥ 90). Molecular beam mass spectrometry (MBMS) experimental data at four temperatures are then used as the model input in a real-world test. Despite the nontrivial relationship between the MBMS measurements and speciation, as well as experimental noise, it is still possible to classify 95% of the samples correctly for RON and 85% of the samples correctly for S in a "leave-one-out" cross validation exercise. The test data set consists of 45 fuels and includes a variety of primary reference fuels, ethanol blends, and other oxygenates.
Predator-prey dynamics have been suggested as simplified models of stratocumulus clouds, with rain acting as a predator of the clouds. We describe a mathematical and computational framework for estimating the parameters of a simplified model from a large eddy simulation (LES). In our method, we extract cycles of cloud growth and decay from the LES and then search for parameters of the simplified model that lead to similar cycles. We implement our method via Markov chain Monte Carlo. Required error models are constructed based on variations of the LES cloud cycles. This computational framework allows us to test the robustness of our overall approach and various assumptions, which is essential for the simplified model to be useful. Our main conclusion is that it is indeed possible to calibrate a predator-prey model so that it becomes a reliable, robust, but simplified representation of selected aspects of an LES. In the future, such models may then be used as a quantitative tool for investigating important questions in cloud microphysics.
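The cycles of cloud growth and decay that the calibration targets can be illustrated with the classic predator-prey equations. This is a generic Lotka-Volterra sketch, not the paper's actual simplified model: cloud water C plays the prey and rain R the predator, and all parameter values are made up for illustration.

```python
import numpy as np

# Hypothetical parameters: cloud growth, rain-on-cloud depletion,
# rain decay, and rain production from cloud, respectively.
alpha, beta, gamma, delta = 1.0, 0.8, 0.9, 0.6

def step(C, R, dt=1e-3):
    """One forward-Euler step of the predator-prey system."""
    dC = alpha * C - beta * C * R     # clouds grow; rain depletes them
    dR = delta * C * R - gamma * R    # rain feeds on cloud, then decays
    return C + dt * dC, R + dt * dR

C, R = 2.0, 0.5
traj = []
for _ in range(40_000):               # integrate 40 time units
    C, R = step(C, R)
    traj.append(C)
traj = np.array(traj)

# Count cloud-growth cycles as upward crossings of the mean cloud level;
# these extracted cycles are the kind of feature the MCMC would match.
level = traj.mean()
up = (traj[:-1] < level) & (traj[1:] >= level)
print("growth/decay cycles observed:", int(up.sum()))
```

In the paper's framework, statistics of such extracted cycles (rather than the raw trajectories) would be compared between the LES and the simplified model inside the MCMC likelihood.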