It is commonly argued that an undesirable feature of a theoretical or phenomenological model is that its salient observables are sensitive to the values of parameters in the model. But in what sense is it undesirable to have such 'fine-tuning' of observables (and hence of the underlying model)? In this paper, we argue that such fine-tuning can be interpreted as a shortcoming of the explanatory capacity of the model: in particular, it signals a lack of explanatory depth. In support of this argument, we develop a scheme, applicable to models that arise broadly in the sciences, that quantitatively relates fine-tuning of observables described by these models to a lack of depth of explanations based on these models. A significant aspect of our scheme is that, broadly speaking, the inclusion of a larger number of parameters in a model decreases the depth of the corresponding explanation. To illustrate our scheme, we apply it in two different settings, comparing the depth of two competing explanations in each. The first setting involves explanations for the Euclidean nature of spatial slices of the universe today: in particular, we compare an explanation provided by the big-bang model of the early 1970s (namely, a cosmological model that traces the evolution of the universe back to a singularity without encountering an inflationary period) with an explanation provided by a general model of cosmic inflation. The second setting has a more phenomenological character: the goal is to infer, using maximum-entropy techniques, the underlying probability distribution from which a limited sequence of data points is drawn. In both settings we find that our analysis favors the model that intuitively provides the deeper explanation of the observable(s) of interest. We thus provide an account that unifies two 'theoretical virtues' of models used broadly in the sciences, namely a lack of fine-tuning and explanatory depth, showing that finely tuned models indeed sacrifice explanatory depth.
Contents

I. Introduction
II. Explanatory depth from a lack of fine-tuning
III. Applications of our approach
    A. The flatness problem in cosmology
    B. Model selection in maximum-entropy modeling
IV. Discussion