1 Abstract

A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon - whether behavioral or in terms of neural activity - and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choices of model parameters. Historically, the gold standard has been to analytically derive the relationship between model parameters and computational properties. However, this enterprise quickly becomes infeasible as biologically realistic constraints are incorporated into the model, increasing its complexity and often resulting in ad hoc approaches to understanding the relationship between model and computation. We bring recent machine learning techniques - the use of deep generative models for probabilistic inference - to bear on this problem, learning distributions of parameters that produce the specified properties of computation. Importantly, the techniques we introduce offer a principled means to understand the implications of model parameter choices for computational properties of interest. We motivate this methodology with a worked example analyzing sensitivity in the stomatogastric ganglion. We then use it to generate insights into neuron-type input-responsivity in a model of primary visual cortex, a new understanding of rapid task switching in superior colliculus models, and attribution of error in recurrent neural networks solving a simple mathematical task. More generally, this work suggests a departure from realism vs. tractability considerations, toward the use of modern machine learning for sophisticated interrogation of biologically relevant models.

2 Introduction

The fundamental practice of theoretical neuroscience is to use a mathematical model to understand neural computation, whether that computation enables perception, action, or some intermediate processing [1]. A neural computation is systematized with a set of equations - the model - and these equations are motivated by biophysics, neurophysiology, and other conceptual considerations. The function of this system is governed by the choice of model parameters, which, when configured in a particular way, give rise to a measurable signature of a computation. The work of analyzing a model then requires solving the inverse problem: given a computation of interest, how can we reason about these particular parameter configurations? The inverse problem is crucial for reasoning about likely parameter values, uniqueness and degeneracy of solutions, attractor states and phase transitions, and predictions made by the model.
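The flavor of this inverse approach can be sketched in a few lines. The example below is our illustration, not the implementation used in this work: it substitutes a reparameterized Gaussian for a deep generative model, uses a toy two-stage firing-rate circuit, and assumes PyTorch; the names emergent_property, target_mean, and target_var are hypothetical.

```python
import torch

def emergent_property(z):
    # Toy "circuit": a two-stage firing-rate model with unit input,
    #   r1 = tanh(w1 * 1),  r2 = tanh(w2 * r1).
    # The emergent property is the downstream rate r2. Many (w1, w2)
    # pairs produce the same r2, so the inferred distribution should
    # recover that degenerate set rather than a single point estimate.
    w1, w2 = z[:, 0], z[:, 1]
    return torch.tanh(w2 * torch.tanh(w1))

# q(z): a reparameterized diagonal Gaussian over the two circuit weights
# (standing in for a deep generative model in this sketch).
mean = torch.zeros(2, requires_grad=True)
log_std = torch.full((2,), -1.0, requires_grad=True)
opt = torch.optim.Adam([mean, log_std], lr=0.02)

target_mean, target_var = 0.5, 0.01   # specified property statistics
for step in range(3000):
    eps = torch.randn(1024, 2)
    z = mean + log_std.exp() * eps    # sampled circuit parameters
    f = emergent_property(z)
    # Penalize mismatch of the property's mean/variance with the target,
    # while rewarding entropy so q stays as broad as the constraints allow.
    entropy = log_std.sum()           # Gaussian entropy up to constants
    loss = ((f.mean() - target_mean) ** 2
            + (f.var() - target_var) ** 2
            - 0.01 * entropy)
    opt.zero_grad(); loss.backward(); opt.step()

# Samples from the trained q(z) now concentrate on parameter settings
# whose emergent property matches the specification.
```

The entropy bonus is what distinguishes this from a point estimate: instead of one parameter configuration, the optimization returns a distribution that exposes the degeneracies and sensitivities the inverse problem is meant to characterize.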
Consider the idealized practice: one carefully designs a model and analytically derives how model parameters govern the computation. Seminal examples of this gold standard (which often adopt approaches from statistical physics) include o...
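To make concrete the kind of parameter-to-computation relationship such derivations supply, consider a deliberately simple linear rate circuit (this toy example is ours, not one of the seminal works referred to above):
\[
\tau \frac{d\mathbf{r}}{dt} = -\mathbf{r} + W\mathbf{r} + \mathbf{h},
\qquad
\mathbf{r}^{*} = (I - W)^{-1}\mathbf{h},
\]
where the fixed point \(\mathbf{r}^{*}\) is stable exactly when every eigenvalue \(\lambda\) of \(W\) satisfies \(\mathrm{Re}(\lambda) < 1\). Here the relationship between parameters and computation is fully explicit: the recurrent weights \(W\) determine both the circuit's steady-state response and the regime boundary at \(\mathrm{Re}(\lambda) = 1\). For nonlinear, biologically detailed models no such closed-form characterization is generally available, which is the gap the inferential approach above is meant to fill.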