The microscopically complicated real world exhibits behavior that often yields to simple yet quantitatively accurate descriptions. Predictions are possible despite large uncertainties in microscopic parameters, both in physics and in multiparameter models in other areas of science. We connect the two by analyzing parameter sensitivities in a prototypical continuum theory (diffusion) and at a self-similar critical point (the Ising model). We trace the emergence of an effective theory for long-scale observables to a compression of the parameter space quantified by the eigenvalues of the Fisher Information Matrix. A similar compression appears ubiquitously in models taken from diverse areas of science, suggesting that the parameter space structure underlying effective continuum and universal theories in physics also permits predictive modeling more generally.
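A minimal numerical sketch of the parameter-space compression described above, using a toy sum-of-exponentials model (our illustrative choice, not one taken from the text): the eigenvalues of the Fisher Information Matrix built from the prediction Jacobian typically span many decades, so only a few stiff parameter combinations control the observable behavior.

```python
import numpy as np

# Toy "sloppy" model: y(t) = sum_i exp(-k_i * t), observed at a few times.
# (Illustrative choice; the abstract does not prescribe this model.)
t = np.linspace(0.5, 5.0, 20)
k = np.array([0.3, 1.0, 3.0, 9.0])  # hypothetical rate parameters

def predictions(k):
    return np.exp(-np.outer(t, k)).sum(axis=1)

# Jacobian of the predictions with respect to log-parameters,
# estimated by forward finite differences.
eps = 1e-6
J = np.column_stack([
    (predictions(k * np.exp(eps * (np.arange(k.size) == i))) - predictions(k)) / eps
    for i in range(k.size)
])

# Fisher Information Matrix for unit-variance Gaussian noise: FIM = J^T J.
fim = J.T @ J
eigvals = np.linalg.eigvalsh(fim)[::-1]
print(eigvals / eigvals[0])  # eigenvalue ratios typically span many decades
```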
Large-scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are 'sloppy': their behavior is controlled by a relatively small number of parameter combinations. We review an information-theoretic framework for analyzing sloppy models. This formalism is based on the Fisher Information Matrix, which we interpret as a Riemannian metric on a parameterized space of models; distance in this space measures how distinguishable two models are by their predictions. Sloppy model manifolds are bounded, with a hierarchy of widths and extrinsic curvatures. We show how the manifold boundary approximation can extract the simple, hidden theory from a complicated sloppy model. We argue that the success of simple effective models in physics likewise emerges from complicated processes with low effective dimensionality. We discuss the ramifications of sloppy models for biochemistry and for science more generally, and suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
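To make the metric interpretation concrete, here is a sketch using a hypothetical two-parameter model of our own choosing: the FIM g = JᵀJ, read as a Riemannian metric on parameter space, reproduces (to leading order) the squared distance between the prediction vectors of two nearby models in data space.

```python
import numpy as np

t = np.linspace(0.5, 5.0, 20)

def predictions(theta):
    # Hypothetical two-parameter model: y(t) = exp(-theta_0 t) + exp(-theta_1 t)
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def jacobian(theta, eps=1e-6):
    base = predictions(theta)
    return np.column_stack([
        (predictions(theta + eps * np.eye(2)[i]) - base) / eps
        for i in range(2)
    ])

theta = np.array([0.5, 2.0])
J = jacobian(theta)
g = J.T @ J                                   # FIM = metric on parameter space

d = 1e-3 * np.array([1.0, -1.0])              # small parameter displacement
metric_dist2 = d @ g @ d                      # metric estimate of distance^2
true_dist2 = np.sum((predictions(theta + d) - predictions(theta)) ** 2)
print(metric_dist2, true_dist2)               # agree to leading order in |d|
```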
Parameter estimation by nonlinear least squares minimization is a common problem that has an elegant geometric interpretation: the possible parameter values of a model induce a manifold within the space of data predictions. The minimization problem is then to find the point on the manifold closest to the experimental data. We show that the model manifolds of a large class of models, known as sloppy models, have many universal features; they are characterized by a geometric series of widths, extrinsic curvatures, and parameter-effects curvatures, which we describe as a hyper-ribbon. A number of common difficulties in optimizing least squares problems stem from this shared geometric structure. First, algorithms tend to run into the boundaries of the model manifold, causing parameters to diverge or become unphysical before they have been optimized. We introduce the model graph as an extension of the model manifold to remedy this problem. We argue that appropriate priors can remove the boundaries and further improve the convergence rates. We show that typical fits will have many evaporated parameters unless the data are very accurately known. Second, 'bare' model parameters are usually ill-suited to describing model behavior; cost contours in parameter space tend to form hierarchies of plateaus and long narrow canyons. Geometrically, we understand this inconvenient parameterization as an extremely skewed coordinate basis and show that it induces a large parameter-effects curvature on the manifold. By constructing alternative coordinates based on geodesic motion, we show that these long narrow canyons are transformed in many cases into a single quadratic, isotropic basin. We interpret the modified Gauss-Newton and Levenberg-Marquardt fitting algorithms as Euler approximations to geodesic motion in these natural coordinates on the model manifold and the model graph, respectively. By adding a geodesic acceleration correction to these algorithms, we alleviate the difficulties from parameter-effects curvature, improving both efficiency and success rates at finding good fits.
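A sketch of a single Levenberg-Marquardt step augmented with the geodesic acceleration correction described above, assuming user-supplied residual and Jacobian callables. The finite-difference estimate of the second directional derivative and the 0.75 acceptance threshold follow the general scheme, but the function name, defaults, and damping choice are our own, not a canonical API.

```python
import numpy as np

def lm_geodesic_step(theta, residuals, jacobian, lam=1e-3, h=0.1):
    """One Levenberg-Marquardt step with a geodesic acceleration correction.

    A sketch under stated assumptions, not a definitive implementation.
    """
    r = residuals(theta)
    J = jacobian(theta)
    A = J.T @ J + lam * np.eye(theta.size)    # damped metric (LM regularization)

    # First-order (velocity) step: the usual Levenberg-Marquardt update.
    v = -np.linalg.solve(A, J.T @ r)

    # Second directional derivative of the residuals along v, estimated by
    # finite differences: r_vv ~ (2/h) * ((r(theta + h v) - r(theta))/h - J v).
    r_vv = (2.0 / h) * ((residuals(theta + h * v) - r) / h - J @ v)

    # Geodesic acceleration: second-order correction in the same damped metric.
    a = -np.linalg.solve(A, J.T @ r_vv)

    # Only trust the correction when it is small compared with the velocity.
    if np.linalg.norm(a) <= 0.75 * np.linalg.norm(v):
        return theta + v + 0.5 * a
    return theta + v
```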
Fitting model parameters to experimental data is a common yet often challenging task, especially if the model contains many parameters. Typically, algorithms get lost in regions of parameter space in which the model is unresponsive to changes in parameters, and one is left to make adjustments by hand. We explain this difficulty by interpreting the fitting process as a generalized interpolation procedure. By considering the manifold of all model predictions in data space, we find that cross sections have a hierarchy of widths and are typically very narrow. Algorithms become stuck as they move near the boundaries. We observe that the model manifold, in addition to being tightly bounded, has low extrinsic curvature, leading to the use of geodesics in the fitting process. We improve the convergence of the Levenberg-Marquardt algorithm by adding geodesic acceleration to the usual step.
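Hypothetical usage of the lm_geodesic_step sketch above on a two-exponential toy fit; the synthetic data, starting point, fixed damping, and iteration count are illustrative choices, and the block assumes numpy and the step function already defined.

```python
# Hypothetical usage on a two-exponential toy model (assumes lm_geodesic_step).
t = np.linspace(0.5, 5.0, 20)
data = np.exp(-0.7 * t) + np.exp(-2.5 * t)    # synthetic "experimental" data

def residuals(theta):
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t) - data

def jacobian(theta, eps=1e-6):
    base = residuals(theta)
    return np.column_stack([
        (residuals(theta + eps * np.eye(2)[i]) - base) / eps for i in range(2)
    ])

theta = np.array([0.3, 4.0])                  # deliberately poor starting point
for _ in range(50):
    theta = lm_geodesic_step(theta, residuals, jacobian)
print(theta)  # should approach (0.7, 2.5), up to parameter permutation
```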