Terrestrial locomotion requires animals to coordinate their limb movements to efficiently traverse their environment. While previous studies in hexapods have reported that limb coordination patterns can vary substantially, the structure of this variability is not yet well understood. Here, we characterized the symmetric and asymmetric components of variation in walking kinematics in the genetic model organism Drosophila. We found that Drosophila use a single continuum of coordination patterns without evidence for preferred configurations. Spontaneous symmetric variability was associated with modulation of a single control parameter—stance duration—while asymmetric variability consisted of small, limb-specific modulations along multiple dimensions of the underlying symmetric pattern. Commands that modulated walking speed, originating from artificial neural activation or from the visual system, evoked modulations consistent with spontaneous behavior. Our findings suggest that Drosophila employ a low-dimensional control architecture, which provides a framework for understanding the neural circuits that regulate hexapod legged locomotion.
Direction-selective neurons respond to visual motion in a preferred direction. They are direction-opponent if they are also inhibited by motion in the opposite direction. In flies and vertebrates, direction opponency has been observed in second-order direction-selective neurons, which achieve this opponency by subtracting signals from first-order direction-selective cells with opposite directional tunings. Here, we report direction opponency in Drosophila that emerges in first-order direction-selective neurons, the elementary motion detectors T4 and T5. This opponency persists when synaptic output from these cells is blocked, suggesting that it arises from feedforward, not feedback, computations. These observations exclude a broad class of linear-nonlinear models that have been proposed to describe direction-selective computations. However, they are consistent with models that include dynamic nonlinearities. Simulations of opponent models suggest that direction opponency in first-order motion detectors improves motion discriminability by suppressing noise generated by the local structure of natural scenes.
Visual motion estimation is a canonical neural computation. In Drosophila, recent advances have identified anatomical and functional circuitry underlying direction-selective computations. Models with varying levels of abstraction have been proposed to explain specific experimental results but have rarely been compared across experiments. Here we use the wealth of available anatomical and physiological data to construct a minimal, biophysically inspired synaptic model for Drosophila's first-order direction-selective T4 cells. We show how this model relates mathematically to classical models of motion detection, including the Hassenstein-Reichardt correlator model. We used numerical simulation to test how well this synaptic model could reproduce measurements of T4 cells across many datasets and stimulus modalities. These comparisons include responses to sinusoidal gratings, to apparent motion stimuli, to stochastic stimuli, and to natural scenes. Without fine-tuning, this model sufficed to reproduce many, but not all, response properties of T4 cells. Since this model is flexible and based on straightforward biophysical properties, it provides an extensible framework for developing a mechanistic understanding of T4 neural response properties. Moreover, it can be used to assess the sufficiency of simple biophysical mechanisms to describe features of the direction-selective computation and identify where our understanding must be improved.
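The classical Hassenstein-Reichardt correlator referenced above can be sketched in a few lines: each input is correlated with a delayed (low-pass filtered) copy of its neighbor, and the two mirror-symmetric half-correlators are subtracted. This is a minimal illustrative sketch, not the synaptic model from the paper; the filter time constant and stimulus parameters below are arbitrary choices.

```python
import numpy as np

def hrc_response(left, right, dt, tau):
    """Hassenstein-Reichardt correlator: correlate each input with a
    low-pass filtered (delayed) copy of its neighbor, then subtract
    the two mirror-symmetric half-correlators."""
    alpha = dt / (tau + dt)  # first-order low-pass filter as the delay stage

    def lowpass(x):
        y = np.zeros_like(x)
        for t in range(1, len(x)):
            y[t] = y[t - 1] + alpha * (x[t] - y[t - 1])
        return y

    return lowpass(left) * right - left * lowpass(right)

# A 2 Hz sinusoid moving in the detector's preferred direction:
# the right input lags the left by a quarter cycle.
t = np.arange(0, 2, 0.001)
left = np.sin(2 * np.pi * 2 * t)
right = np.sin(2 * np.pi * 2 * t - np.pi / 2)
resp = hrc_response(left, right, dt=0.001, tau=0.05)
print(resp.mean() > 0)  # net positive response in the preferred direction
```

Swapping the two inputs reverses the stimulus direction and flips the sign of the time-averaged output, which is the direction selectivity the model is built to produce.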
Recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because finite networks can flexibly adapt their internal representations. However, our theoretical understanding of how the learned hidden layer representations of finite networks differ from the fixed representations of infinite networks remains incomplete. Perturbative finite-width corrections to the network prior and posterior have been studied, but the asymptotics of learned features have not been fully characterized. Here, we argue that the leading finite-width corrections to the average feature kernels for any Bayesian network with linear readout and quadratic cost have a largely universal form. We illustrate this explicitly for two classes of fully connected networks: deep linear networks and networks with a single nonlinear hidden layer. Our results begin to elucidate which data features wide Bayesian neural networks learn to represent.
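As a sketch of the infinite-width baseline against which such finite-width corrections are defined, the prior-averaged feature kernel of a random linear layer can be compared to its infinite-width limit by Monte Carlo. The widths and sample counts below are arbitrary illustrative choices, and this covers only the prior kernel, not the posterior corrections analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, samples = 3, 50, 20000  # input dim, hidden width, Monte Carlo draws
x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)

# Infinite-width prior feature kernel of a linear layer h = W x / sqrt(d),
# with i.i.d. standard normal weights: K(x1, x2) = x1 . x2 / d
k_inf = x1 @ x2 / d

# Empirical average feature kernel over random finite-width networks
ks = []
for _ in range(samples):
    W = rng.standard_normal((n, d))
    h1, h2 = W @ x1 / np.sqrt(d), W @ x2 / np.sqrt(d)
    ks.append(h1 @ h2 / n)
print(abs(np.mean(ks) - k_inf) < 0.05)  # finite-width average matches the limit
```

For a linear layer the prior-averaged kernel matches the infinite-width kernel exactly at any width; it is the posterior-averaged kernel, after conditioning on data, that acquires the width-dependent corrections the abstract describes.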
Bayesian neural networks are theoretically well understood only in the infinite-width limit, where Gaussian priors over network weights yield Gaussian priors over network outputs. Recent work has suggested that finite Bayesian networks may outperform their infinite counterparts, but their non-Gaussian output priors have been characterized only through perturbative approaches. Here, we derive exact solutions for the output priors for individual input examples of a class of finite fully connected feedforward Bayesian neural networks. For deep linear networks, the prior has a simple expression in terms of the Meijer G-function. The prior of a finite ReLU network is a mixture of the priors of linear networks of smaller widths, corresponding to different numbers of active units in each layer. Our results unify previous descriptions of finite network priors in terms of their tail decay and large-width behavior.
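The non-Gaussianity of finite deep linear network priors is easy to see in the simplest case. For a bias-free deep linear network with unit-width layers, the output prior is the distribution of a product of independent Gaussians (the width-1 special case of the Meijer G-function prior), which is heavier-tailed than any Gaussian. A minimal Monte Carlo sketch, with arbitrary depth and sample count:

```python
import numpy as np

rng = np.random.default_rng(1)
L, n = 3, 200_000  # network depth, number of prior samples

# Output prior of a bias-free, width-1 deep linear network:
# a product of L independent standard Gaussian weights.
z = np.prod(rng.standard_normal((L, n)), axis=0)

# Excess kurtosis grows with depth, so the prior is heavier-tailed
# than the Gaussian obtained in the infinite-width limit.
kurt = np.mean(z**4) / np.mean(z**2) ** 2
print(kurt > 3)  # a Gaussian would give kurtosis exactly 3
```

For this width-1 case the kurtosis is 3^L in expectation, so the deviation from Gaussianity grows rapidly with depth; at large widths the product concentrates and the prior approaches the Gaussian limit, consistent with the large-width behavior the abstract describes.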