Each time a locomoting fly turns, the visual image sweeps over the retina and generates a motion stimulus. Classic behavioral experiments suggested that flies use active neural-circuit mechanisms to suppress the perception of self-generated visual motion during intended turns. Direct electrophysiological evidence, however, has been lacking. We found that visual neurons in Drosophila receive motor-related inputs during rapid flight turns. These inputs arrived with a sign and latency appropriate for suppressing each targeted cell's visual response to the turn. Precise measurements of behavioral and neuronal response latencies supported the idea that motor-related inputs to optic flow-processing cells represent internal predictions of the expected visual drive induced by voluntary turns. Motor-related inputs to small object-selective visual neurons could reflect either proprioceptive feedback from the turn or internally generated signals. Our results in Drosophila echo the suppression of visual perception during rapid eye movements in primates, demonstrating common functional principles of sensorimotor processing across phyla.
Vision influences behavior, but ongoing behavior also modulates vision in animals ranging from insects to primates. The function and biophysical mechanisms of most such modulations remain unresolved. Here, we combine behavioral genetics, electrophysiology, and high-speed videography to advance a function for behavioral modulations of visual processing in Drosophila. We argue that a set of motion-sensitive visual neurons regulate gaze-stabilizing head movements. We describe how, during flight turns, Drosophila perform a set of head movements that require silencing their gaze-stability reflexes along the primary rotation axis of the turn. Consistent with this behavioral requirement, we find pervasive motor-related inputs to the visual neurons, which quantitatively silence their predicted visual responses to rotations around the relevant axis while preserving sensitivity around other axes. This work proposes a function for a behavioral modulation of visual processing and illustrates how the brain can remove one sensory signal from a circuit carrying multiple related signals.
The lack of a deeper understanding of how olfactory sensory neurons (OSNs) encode odors has hindered progress in understanding olfactory signal processing in higher brain centers. Here we employ methods of system identification to investigate how Drosophila OSNs encode time-varying odor stimuli and represent them in the spike domain for further processing. To apply system identification techniques, we built a novel low-turbulence odor delivery system that allowed us to deliver airborne stimuli in a precise and reproducible fashion. The system provides a 1% tolerance in stimulus reproducibility and exact control of odor concentration and concentration gradient on a millisecond time scale. Using this novel setup, we recorded and analyzed the in vivo response of OSNs to a wide range of time-varying odor waveforms. We report for the first time that the response of Or59b OSNs is highly precise and reproducible across trials. Further, we show empirically that the response of an OSN depends not only on the odor concentration, but also on the rate of change of the concentration. Moreover, we demonstrate that a two-dimensional (2D) encoding manifold in a concentration-concentration gradient space provides a quantitative description of the neuron's response. We then use the white-noise system identification methodology to construct one-dimensional (1D) and two-dimensional (2D) Linear-Nonlinear-Poisson (LNP) cascade models of the sensory neuron for a fixed mean odor concentration and fixed contrast. We show that in terms of predicting the intensity rate of the spike train, the 2D LNP model performs on par with the 1D LNP model, with a root-mean-square error (RMSE) increase of about 5 to 10%. Surprisingly, we find that for a fixed contrast of the white-noise odor waveforms, the nonlinear block of each of the two models changes with the mean input concentration. Conversely, for a fixed mean of the odor waveform, the shape of the nonlinearities of both the 1D and the 2D LNP models appears to be independent of the stimulus contrast. This suggests that white-noise system identification of Or59b OSNs depends only on the first moment of the odor concentration. Finally, by comparing the 2D encoding manifold and the 2D LNP model, we demonstrate that the OSN identification results depend on the particular type of test odor waveform employed. This suggests an adaptive neural encoding model for Or59b OSNs that changes its nonlinearity in response to the odor concentration waveforms.
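The LNP cascade described in this abstract can be illustrated with a minimal sketch: a linear temporal filter applied to the stimulus, a static nonlinearity mapping the filtered drive to a firing rate, and Poisson spike generation. The filter shape and sigmoidal nonlinearity below are hypothetical placeholders for illustration, not the components fitted to Or59b OSN data in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def lnp_rate(stimulus, linear_filter, nonlinearity, dt):
    """Linear-Nonlinear stage: filter the stimulus, then apply a static nonlinearity."""
    drive = np.convolve(stimulus, linear_filter, mode="full")[: len(stimulus)] * dt
    return nonlinearity(drive)

def lnp_spikes(rate, dt):
    """Poisson stage: draw spike counts per time bin from the instantaneous rate."""
    return rng.poisson(rate * dt)

# Hypothetical components (not fitted to Or59b data):
dt = 1e-3                                  # 1 ms time step
t = np.arange(0, 0.2, dt)
linear_filter = np.exp(-t / 0.02) - 0.5 * np.exp(-t / 0.05)   # biphasic temporal filter
nonlinearity = lambda x: 50.0 / (1.0 + np.exp(-(x - 1.0)))    # sigmoidal rate function (Hz)

stimulus = rng.standard_normal(2000)       # white-noise odor waveform (arbitrary units)
rate = lnp_rate(stimulus, linear_filter, nonlinearity, dt)
spikes = lnp_spikes(rate, dt)
```

A 2D variant would add a second linear filter (e.g., one sensitive to the concentration gradient) and a two-argument nonlinearity; fitting either model to data is what the white-noise system identification procedure accomplishes.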
Temporal experience of odor gradients is important for the spatial orientation of animals. The fruit fly Drosophila melanogaster exhibits robust odor-guided behaviors in an odor gradient field. To investigate how early olfactory circuits process temporal variation in olfactory stimuli, we subjected flies to precisely defined odor concentration waveforms and examined the spike patterns of olfactory sensory neurons (OSNs) and projection neurons (PNs). We found a significant temporal transformation between OSN and PN spike patterns, manifested by the PN output strongly signaling the OSN spike rate and its rate of change. A simple two-dimensional model admitting the OSN spike rate and its rate of change as inputs closely predicted the PN output. When cascaded with the rate-of-change encoding by OSNs, PNs primarily signal the acceleration and the rate of change of dynamic odor stimuli to higher brain centers, thereby enabling animals to respond reliably to the onsets of odor concentrations. DOI: http://dx.doi.org/10.7554/eLife.06651.001
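The two-dimensional OSN-to-PN transformation described above can be sketched as a toy model in which the PN rate is a rectified combination of the OSN spike rate and its derivative. The coefficients, the rectification, and the step-response input below are hypothetical illustrations, not the parameters fitted in the study.

```python
import numpy as np

def pn_prediction(osn_rate, dt, w_rate, w_deriv, threshold=0.0):
    """Toy 2D model: PN output as a rectified linear combination of the
    OSN spike rate and its rate of change (coefficients are hypothetical)."""
    d_rate = np.gradient(osn_rate, dt)          # rate of change of the OSN spike rate
    drive = w_rate * osn_rate + w_deriv * d_rate
    return np.maximum(drive - threshold, 0.0)   # rectification keeps rates non-negative

dt = 1e-3
t = np.arange(0, 1.0, dt)
osn_rate = 40.0 * (1.0 - np.exp(-t / 0.1))      # OSN response to an odor step (Hz)
pn_rate = pn_prediction(osn_rate, dt, w_rate=0.5, w_deriv=0.1)
# The derivative term makes the predicted PN response peak at stimulus onset
# and then relax toward a lower sustained level, emphasizing odor onsets.
```

With these illustrative weights, the onset transient dominates the sustained response, which is the qualitative behavior the abstract attributes to PN encoding.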
Highlights
- Optic flow-processing neurons are suppressed during loom-evoked flight turns
- This suppression cuts signaling of head-movement-induced visual motion during turns
- The cells are not suppressed during optomotor responses to rotational visual motion
- Suppression thus occurs during course-changing, but not course-stabilizing, turns