Abstract: The article surveys and extends variational formulations of the thermodynamic free energy and discusses their information-theoretic content from the perspective of mathematical statistics. We revisit the well-known Jarzynski equality for nonequilibrium free energy sampling within the framework of importance sampling and Girsanov change-of-measure transformations. The implications of the different variational formulations for the design of efficient stochastic optimization and nonequilibrium simulation algorithms for computing free energies are discussed and illustrated.
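As a minimal illustration of the exponential work averaging that underlies the Jarzynski equality (a toy sketch, not the article's numerical setup), consider a Gaussian work distribution, for which the identity E[exp(-βW)] = exp(-βΔF) gives the free energy difference in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0             # inverse temperature (assumption: units with k_B T = 1)
mu, sigma = 2.0, 1.0   # toy work distribution W ~ N(mu, sigma^2)

# For Gaussian work, the Jarzynski identity E[exp(-beta W)] = exp(-beta dF)
# yields the closed form dF = mu - beta * sigma**2 / 2.
dF_exact = mu - beta * sigma**2 / 2

W = rng.normal(mu, sigma, size=1_000_000)
dF_est = -np.log(np.mean(np.exp(-beta * W))) / beta
print(dF_est, dF_exact)
```

Note that the exponential average is dominated by rare low-work realisations, which is precisely why the article's importance sampling perspective on this estimator matters in practice.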
Optimal control of diffusion processes is intimately connected to the problem of solving certain Hamilton–Jacobi–Bellman equations. Building on recent machine-learning-inspired approaches to high-dimensional PDEs, we investigate the potential of iterative diffusion optimisation techniques, in particular considering applications in importance sampling and rare event simulation, and focusing on problems without diffusion control, with linearly controlled drift and running costs that depend quadratically on the control. More generally, our methods apply to nonlinear parabolic PDEs with a certain shift invariance. The choice of an appropriate loss function being a central element in the algorithmic design, we develop a principled framework based on divergences between path measures, encompassing various existing methods. Motivated by connections to forward-backward SDEs, we propose and study the novel log-variance divergence, showing favourable properties of corresponding Monte Carlo estimators. The promise of the developed approach is exemplified by a range of high-dimensional and metastable numerical examples.
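To make the log-variance divergence concrete, here is a hedged toy sketch (not taken from the paper): the divergence between a proposal measure Q and a target P can be estimated as the sample variance of the log Radon–Nikodym derivative under Q. For two unit-variance Gaussians the divergence is available in closed form, so the Monte Carlo estimator can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_variance_divergence(log_ratio, samples):
    """Monte Carlo estimate of Var_Q[log(dP/dQ)] from samples drawn from Q."""
    return np.var(log_ratio(samples))

# Toy example with P = N(m1, 1) and Q = N(m2, 1):
# log dP/dQ(x) = (m1 - m2) * x + (m2**2 - m1**2) / 2,
# so under Q the divergence is exactly (m1 - m2)**2.
m1, m2 = 1.0, 0.0
log_ratio = lambda x: (m1 - m2) * x + (m2**2 - m1**2) / 2
x = rng.normal(m2, 1.0, size=500_000)
est = log_variance_divergence(log_ratio, x)
print(est)  # close to (m1 - m2)**2 = 1.0
```

The variance vanishes exactly when the log-ratio is constant, i.e. when P = Q, which is the property that makes this divergence a natural loss function for learning optimal changes of measure.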
We propose an adaptive importance sampling scheme for the simulation of rare events when the underlying dynamics is given by a diffusion. The scheme is based on a Gibbs variational principle that is used to determine the optimal (i.e. zero-variance) change of measure, and it exploits the fact that the latter can be rephrased as a stochastic optimal control problem. The control problem can be solved by a stochastic approximation algorithm, using the Feynman-Kac representation of the associated dynamic programming equations, and we discuss numerical aspects for high-dimensional problems along with simple toy examples.

When computing small probabilities associated with rare events by Monte Carlo, it often happens that the variance of the estimator is of the same order as the quantity of interest. Importance sampling is a means to reduce the variance of the Monte Carlo estimator by sampling from an alternative probability distribution under which the rare event is no longer rare. The estimator must then be corrected by an appropriate reweighting that depends on the likelihood ratio between the two distributions, and, depending on this change of measure, the variance of the estimator may easily increase rather than decrease, e.g. when the two probability distributions are (almost) non-overlapping. The Gibbs variational principle links the cumulant generating function (or: free energy) of a random variable with an entropy minimisation principle, and it characterises the probability measure that leads to importance sampling estimators with minimum variance. When the underlying probability measure is the law of a diffusion process, the variational principle can be rephrased as a stochastic optimal control problem, with the optimal control inducing the change of measure that minimises the variance. In this article, we discuss the properties of the control problem and propose a numerical method to solve it.
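The variance-reduction mechanism described above can be illustrated with a standard textbook example rather than the article's diffusion setting: estimating a Gaussian tail probability by shifting the sampling mean so the event becomes typical, then reweighting by the likelihood ratio between the two distributions:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)
a = 4.0
p_exact = 0.5 * erfc(a / sqrt(2.0))   # P(X > a) for X ~ N(0, 1)

n = 200_000
# Importance sampling: draw from the shifted proposal N(a, 1), under which
# the event {X > a} is no longer rare, and correct with the likelihood
# ratio dN(0,1)/dN(a,1)(x) = exp(-a * x + a**2 / 2).
y = rng.normal(a, 1.0, size=n)
weights = np.exp(-a * y + a**2 / 2)
p_is = np.mean(weights * (y > a))
print(p_is, p_exact)
```

A naive estimator with the same budget would see only a handful of hits for a probability of order 1e-5, whereas the reweighted estimator achieves a small relative error; a poorly chosen shift, by contrast, can inflate the variance, which is the pathology the optimal change of measure avoids.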
The numerical method is based on a nonlinear Feynman-Kac representation of the underlying dynamic programming equation in terms of a pair of forward-backward stochastic differential equations that can be solved by least-squares regression. At first glance, solving a stochastic control problem may seem more difficult than the original sampling problem; however, it turns out that the reformulation of the sampling problem opens up a completely new toolbox of numerical methods and approximation algorithms that can be combined with Monte Carlo sampling in an iterative fashion, leading to efficient algorithms.