Rapidly and selectively modulating the activity of defined neurons in unrestrained animals is a powerful approach for investigating the circuit mechanisms that shape behavior. In Drosophila melanogaster, temperature-sensitive silencers and activators are widely used to control the activity of genetically defined neuronal cell types. A limitation of these thermogenetic approaches, however, has been their poor temporal resolution. Here we introduce FlyMAD (the fly mind-altering device), which allows thermogenetic silencing or activation within seconds or even fractions of a second. Using computer vision, FlyMAD targets an infrared laser to freely walking flies. As a proof of principle, we demonstrated the rapid silencing and activation of neurons involved in locomotion, vision and courtship. The spatial resolution of the focused beam enabled preferential targeting of neurons in the brain or ventral nerve cord. Moreover, the high temporal resolution of FlyMAD allowed us to discover distinct timing relationships for two neuronal cell types previously linked to courtship song.
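The closed-loop targeting principle (locate the fly in each camera frame, then steer mirrors so the laser lands on it) can be sketched in a few lines. The centroid detector and the 2x3 affine calibration matrix below are illustrative assumptions for a minimal sketch, not FlyMAD's actual pipeline.

```python
import numpy as np

def fly_centroid(frame, threshold=0.5):
    """Intensity-weighted centroid of above-threshold pixels (the fly)."""
    ys, xs = np.nonzero(frame > threshold)
    w = frame[ys, xs]
    return np.array([np.sum(xs * w), np.sum(ys * w)]) / np.sum(w)

def pixel_to_galvo(xy, A):
    """Map a camera pixel to mirror-galvo commands via an affine calibration.

    A is a hypothetical 2x3 matrix, fit beforehand from known
    (pixel, galvo) correspondence pairs.
    """
    return A @ np.append(xy, 1.0)

# Toy frame: a single bright "fly" at pixel column 12, row 7.
frame = np.zeros((32, 32))
frame[7, 12] = 1.0

A = np.array([[0.1, 0.0, -1.6],   # hypothetical calibration matrix
              [0.0, 0.1, -1.6]])
cmd = pixel_to_galvo(fly_centroid(frame), A)
```

In a real system this loop would run at camera frame rate, with the calibration matrix obtained by steering the laser to known positions and recording where it appears in the image.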
Visual object fixation and figure-ground discrimination in Drosophila are robust behaviors requiring sophisticated computation by the visual system, yet the neural substrates remain unknown. Recent experiments in walking flies revealed object fixation behavior mediated by circuitry independent of the motion-sensitive T4-T5 cells required for wide-field motion responses. In tethered flight experiments under closed-loop conditions, we found similar results for one feedback gain, whereas intact T4-T5 cells were necessary for robust object fixation at a higher feedback gain and in figure-ground discrimination tasks. We implemented dynamical models (available at http://strawlab.org/asymmetric-motion/) based on neurons downstream of T4-T5 cells (one a simple phenomenological model, the other a physiologically more realistic model) and found that both predict key features of stripe fixation and figure-ground discrimination and are consistent with a classical formulation. Fundamental to both models is motion asymmetry in the responses of model neurons, whereby front-to-back motion elicits stronger responses than back-to-front motion. When a bilateral pair of such model neurons, based on the well-understood horizontal system cells downstream of T4-T5, is coupled to turning behavior, this asymmetry leads to object fixation and figure-ground discrimination in the presence of noise. Furthermore, the models also predict fixation in front of a moving background, a behavior previously suggested to require an additional pathway. Thus, the models predict several aspects of object responses on the basis of neurons that are also thought to serve a key role in background stabilization.
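The core mechanism, stronger responses to front-to-back (progressive) than to back-to-front (regressive) retinal motion, can be illustrated with a minimal toy discretization (my own sketch, not the models published at strawlab.org/asymmetric-motion): each noisy slip of the stripe triggers a syndirectional turn that partially cancels it, and because outward slips elicit stronger corrections than inward ones, the stripe drifts to the front of the visual field and is held there.

```python
import numpy as np

rng = np.random.default_rng(0)

a, b = 1.0, 0.3    # gains for progressive vs. regressive motion (a > b)
k = 0.8            # coupling of neuron response to syndirectional turning
theta = np.pi / 2  # stripe azimuth (rad); positive = right of midline
trace = []

for _ in range(5000):
    slip = rng.normal(0.0, 0.05)   # noisy retinal slip of the stripe
    # Progressive = motion away from the midline on the side where
    # the stripe is seen (front-to-back on that eye).
    progressive = (slip >= 0) == (theta >= 0)
    gain = a if progressive else b
    turn = k * gain * slip         # syndirectional turn follows the motion
    theta = theta + slip - turn    # the turn cancels part of the slip
    theta = (theta + np.pi) % (2 * np.pi) - np.pi
    trace.append(theta)

fixation_error = np.mean(np.abs(trace[-1000:]))  # small => stripe held in front
```

With a = b the corrections are symmetric and the stripe performs an unbiased random walk; the asymmetry alone converts noise into a net attractive drift toward the frontal position.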
In pre-clinical pathology, there is a paradox between the abundance of raw data (whole slide images from many organs of many individual animals) and the scarcity of pixel-level slide annotations produced by pathologists. Due to time constraints and requirements from regulatory authorities, diagnoses are instead stored as slide labels. Weakly supervised training is designed to take advantage of these data, and the trained models can be used by pathologists to rank slides by their probability of containing a given lesion of interest. In this work, we propose a novel contextualized eXplainable AI (XAI) framework and its application to deep learning models trained on Whole Slide Images (WSIs) in Digital Pathology. Specifically, we apply our methods to a multi-instance-learning (MIL) model, which is trained solely on slide-level labels, without the need for pixel-level annotations. We quantitatively validate our methods by measuring the agreement of our explanation heatmaps with pathologists' annotations, as well as with predictions from a segmentation model trained on such annotations. We demonstrate the stability of the explanations with respect to input shifts, and their fidelity with respect to increased model performance. We quantitatively evaluate the correlation between available pixel-wise annotations and explainability heatmaps. We show that the explanations on important tiles of the whole slide correlate with tissue changes between healthy regions and lesions, but do not exactly behave like a human annotator. This result is consistent with the model's training strategy.
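One simple way to quantify agreement between tile-level explanation scores and tile-level pathologist annotations, in the spirit of the evaluation described above (the function below is an illustrative sketch, not the paper's actual metric), is a rank-based AUC: the probability that a randomly chosen lesion tile receives a higher explanation score than a randomly chosen healthy tile.

```python
import numpy as np

def tile_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: P(score_lesion > score_healthy).

    scores: per-tile explanation heatmap values (higher = more suspicious)
    labels: per-tile annotation (1 = lesion, 0 = healthy)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count (lesion, healthy) pairs ranked correctly; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

A value near 1 means the heatmap ranks annotated lesion tiles above healthy ones; a value near 0.5 means the heatmap carries no information about the annotations.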