Retinal ribbon synapses are the first synapses in the visual system. Unlike conventional synapses in the central nervous system, which are triggered by action potentials, ribbon synapses are driven by graded membrane potentials and are thought to transfer early sensory information faithfully. However, how ribbon synapses compress visual signals and contribute to visual adaptation in retinal circuits remains poorly understood. To this end, we introduce a physiologically constrained module for the ribbon synapse, termed the Ribbon Adaptive Block (RAB), and an extended "hierarchical Linear-Nonlinear-Synapse" (hLNS) framework for the retinal circuit. Our models elegantly reproduce a wide range of experimental recordings of synaptic and circuit-level adaptive behaviors across different cell types and species, and in particular show strong robustness to unseen stimulus protocols. Intriguingly, when using the hLNS framework to fit intracellular recordings from retinal circuits under stimuli resembling natural conditions, we revealed rich and diverse adaptive time constants of ribbon synapses. Furthermore, we predicted a frequency-sensitive gain-control strategy for the synapse between the photoreceptor and the CX bipolar cell, which differs from the classic contrast-based strategy in retinal circuits. Overall, our framework provides a powerful analytical tool for exploring synaptic adaptation mechanisms in early sensory coding.
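The cascade structure named in the abstract (linear filter, static nonlinearity, adaptive synapse stage) can be illustrated with a minimal sketch. Everything below is assumed for illustration, not taken from the paper: the filter shape, the sigmoid parameters, and the vesicle-pool depletion constants are made-up values, and `ln_synapse_response` is a hypothetical name.

```python
import numpy as np

def ln_synapse_response(stimulus, dt=0.001):
    """Toy Linear-Nonlinear-Synapse cascade. Filter shape, nonlinearity,
    and depletion constants are illustrative, not fitted values."""
    # Linear stage: low-pass temporal filter, normalized to unit gain
    t = np.arange(0.0, 0.3, dt)
    filt = np.exp(-t / 0.05)
    filt /= filt.sum()
    drive = np.convolve(stimulus, filt, mode="full")[: len(stimulus)]

    # Nonlinear stage: sigmoidal rectification of the filtered drive
    rate = 1.0 / (1.0 + np.exp(-10.0 * (drive - 0.5)))

    # Synapse stage: vesicle-pool depletion produces gain adaptation,
    # so the response to a sustained stimulus peaks and then decays
    pool, tau_rec = 1.0, 0.5
    release = np.empty_like(rate)
    for i, r in enumerate(rate):
        out = r * pool                  # release scales with pool occupancy
        pool += dt * ((1.0 - pool) / tau_rec) - dt * out
        release[i] = out
    return release
```

Driving this toy cascade with a sustained step produces an early transient followed by a decay to a lower steady level, the signature of synaptic gain adaptation that the RAB module is designed to capture in a physiologically constrained way.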
Computational neural models are essential tools for neuroscientists to study the functional roles of single neurons and neural circuits. With recent advances in experimental techniques, there is a growing demand for neural models at both the single-neuron and large-scale circuit levels. A long-standing challenge in building such models lies in tuning their free parameters to closely reproduce experimental recordings. Many advanced machine-learning-based methods have been developed recently for parameter tuning, but most are task-specific or require onerous manual intervention; a general and fully automated method has so far been lacking. Here, we present a Long Short-Term Memory (LSTM)-based deep learning method, the General Neural Estimator (GNE), which fully automates the parameter-tuning procedure and can be directly applied to both single-neuron models and large-scale neural circuits. In comprehensive comparisons with many advanced methods, GNE showed outstanding performance on both synthesized and experimental data. Finally, we propose a roadmap centered on GNE to guide neuroscientists in computationally reconstructing single neurons and neural circuits, which may inspire future brain-reconstruction techniques and corresponding experimental designs. The code for this work will be made publicly available upon acceptance of this paper.
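The core idea of simulation-based parameter tuning can be shown in miniature: sample candidate parameters, simulate the model forward, and learn the inverse map from simulated traces back to parameters. In this sketch a one-parameter toy neuron model and a nearest-neighbour lookup stand in for the biophysical models and the LSTM estimator of the actual GNE; all names and constants here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(tau, n=200, dt=0.01):
    """Toy neuron model: membrane charging toward a step input,
    governed by a single free parameter, the time constant tau."""
    v = np.zeros(n)
    for i in range(1, n):
        v[i] = v[i - 1] + dt * (1.0 - v[i - 1]) / tau
    return v

# Training set: sample parameters, simulate, keep (trace, parameter) pairs.
taus = rng.uniform(0.05, 0.5, size=500)
traces = np.stack([simulate(tau) for tau in taus])

def estimate_tau(observed):
    """Return the parameter of the training trace nearest to `observed`.
    (Nearest-neighbour lookup stands in for the LSTM regressor here.)"""
    idx = np.argmin(np.linalg.norm(traces - observed, axis=1))
    return taus[idx]

estimated = estimate_tau(simulate(0.2))  # recovers tau close to 0.2
```

The appeal of the learned-estimator approach is that, once trained on simulations, it amortizes inference: fitting a new recording is a single forward pass rather than a fresh optimization run per cell.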
Computational modeling is an essential approach in neuroscience for linking neural mechanisms to experimental observations. Recent advanced machine learning techniques, such as deep learning, leverage synthetic data generated from computational models to reveal underlying neural mechanisms from experimental data. However, despite significant progress, one unsolved problem in these methods is that the synthetic data differ substantially from experimental data, leading to severely biased results. To this end, we introduce the Domain Adaptive Neural Inference framework, which constructs synthetic data that closely resemble the distribution of experimental data and uses the matching synthetic data to predict the neural mechanisms underlying experimental data. We demonstrate the accuracy, efficiency, and versatility of our framework on a variety of experimental observations, including inferring single-neuron biophysics across mouse brain regions from intracellular recordings in the Allen Cell Types Database; inferring biophysical properties of a microcircuit of Cancer borealis from extracellular recordings; and inferring monosynaptic connectivity of mouse CA1 networks from in vivo multi-electrode extracellular recordings. The framework outperforms state-of-the-art methods in every application, and can potentially be generalized to a wide range of computational modeling approaches in the biosciences.
Connectomics is a developing field that aims to reconstruct the connectivity of the nervous system at the nanometer scale. Computer vision technology, especially deep learning methods for image processing, has brought connectomic data analysis into a new era. However, the performance of state-of-the-art (SOTA) methods still falls short of the demands of scientific research. Inspired by the success of ImageNet, we present an annotated ultra-high-resolution image segmentation dataset for cell membranes (U-RISC), the largest cell-membrane-annotated electron microscopy (EM) dataset, with a resolution of 2.18 nm/pixel. Multiple iterative annotation rounds ensured the quality of the dataset. Through an open competition, we reveal that current deep learning methods still fall considerably short of human-level performance, in contrast to ISBI 2012, on which deep learning performance is closer to the human level. To explore the causes of this discrepancy, we analyze the neural networks with a visualization technique, attribution analysis, and find that U-RISC requires a larger area around a pixel to decide whether that pixel belongs to a cell membrane. Finally, we integrate the currently available methods to provide a new benchmark (0.67, 10% higher than the competition leader's 0.61) for cell membrane segmentation on U-RISC, and offer suggestions for developing deep learning algorithms. The U-RISC dataset and the deep learning code used in this study are publicly available.
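The benchmark scores quoted above are presumably pixel-wise F1 scores on the membrane class, the common metric for EM membrane segmentation; the competition's exact scoring may differ, so the sketch below is a plain F1 computation for illustration only.

```python
import numpy as np

def membrane_f1(pred, truth):
    """Pixel-wise F1 on the membrane (foreground) class of binary masks.
    Illustrative only; the competition's exact metric may differ."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # membrane pixels correctly found
    fp = np.sum(pred & ~truth)     # background wrongly called membrane
    fn = np.sum(~pred & truth)     # membrane pixels missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Because membrane pixels are a thin minority class, foreground F1 penalizes both over- and under-segmentation far more sharply than plain pixel accuracy would, which is why a gap of 0.61 versus 0.67 is substantial.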