Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: 1) creating solutions to engineering problems, 2) identifying predictive variables, 3) setting benchmarks for simple models of the brain, and 4) serving itself as a model for the brain. The breadth and ease of its applicability suggest that machine learning should be in the toolbox of most systems neuroscientists.

Figure 1: Growth of Machine Learning in Neuroscience. Here we plot the proportion of neuroscience papers that have used ML over the last two decades, i.e., the number of papers involving both neuroscience and machine learning, normalized by the total number of neuroscience papers. Neuroscience papers were identified using a search for "neuroscience" on Semantic Scholar. Papers involving both neuroscience and machine learning were identified with a search for "machine learning" and "neuroscience" on Semantic Scholar.

At the highest level, ML is typically divided into the subtypes of supervised, unsupervised, and reinforcement learning. Supervised learning builds a model that predicts outputs from input data. Unsupervised learning is concerned with finding structure in data, e.g., clustering, dimensionality reduction, and compression. Reinforcement learning allows a system to learn the best actions based on the reward that occurs at the end of a sequence of actions. This review focuses on supervised learning.

Why is creating progressively more accurate regression or classification methods (see Box 1) worthy of a title like 'The AI Revolution' (Appenzeller 2017)? It is because countless questions can be framed in this manner. When classifying images, an input picture can be used to predict the object in the picture. When playing a game, the setup of the board (input) can be used to predict an optimal move (output). When texting on our smartphones, our current text is used to suggest the next word. Similarly, science has many instances where we desire to make predictions from measured data.

Figure 2: Examples of the four roles of supervised machine learning in neuroscience. 1) ML can solve engineering problems. For example, it can help researchers control a prosthetic limb using brain activity. 2) ML can identify predictive variables. For example, using MRI data, we can identify which brain regions are most predictive for diagnosing Alzheimer's disease (Lebedev et al. 2014). 3) ML can benchmark simple models. For example, we can compare the predictive performance of the simple "population vector" model of how neural activity relates to movement (Georgopoulos, Schwartz, and Kettner 1986) to an ML benchmark (e.g., an RNN). 4) ML can serve as a model of the brain. For example, researchers have studied how neurons in the visual pathway correspond to units in an artificial network that is trained to classify images...
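As a minimal illustration of role 3 (benchmarking simple models), the sketch below compares a linear decoder against a more flexible ML regressor using cross-validation. It is not from the paper: the data are synthetic stand-ins for neural firing rates and hand velocity, and the model choices (scikit-learn's LinearRegression and MLPRegressor) are assumptions made for the example.

```python
# Hypothetical sketch: benchmark a simple linear decoder against a flexible
# ML regressor. X stands in for neural firing rates, y for 2D hand velocity;
# in practice these would come from recordings.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 500, 40
X = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)  # firing rates
true_weights = rng.normal(size=(n_neurons, 2))
y = X @ true_weights + rng.normal(scale=5.0, size=(n_trials, 2))    # hand velocity

# Simple model: a linear readout (stands in for a population-vector-style decoder).
simple = LinearRegression()
# ML benchmark: a small multilayer perceptron.
benchmark = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)

for name, model in [("linear", simple), ("MLP benchmark", benchmark)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```

If the ML benchmark substantially outperforms the simple model, the simple model is missing predictable structure in the data; if the two are comparable, the simple model captures most of what is predictable.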
Any function can be constructed using a hierarchy of simpler functions through composition. Such a hierarchy can be characterized by a binary rooted tree. Each node of this tree is associated with a function that takes as inputs two numbers from its children and produces one output. Since thinking about functions in terms of computation graphs has become popular, we may want to know which functions can be implemented on a given tree. Here, we describe a set of necessary constraints in the form of a system of non-linear partial differential equations that must be satisfied. Moreover, we prove that these conditions are sufficient in both the analytic and the bit-valued settings. In the latter case, we explicitly enumerate the discrete functions and observe that there are relatively few. Our point of view allows us to compare different neural network architectures with regard to their function spaces. Our work connects the structure of computation graphs with the functions they can implement and has potential applications to neuroscience and computer science.
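The sketch below makes the construction described above concrete: a binary rooted tree whose leaves hold input variables and whose internal nodes each apply a two-argument function to the values of their children. It is an illustrative assumption of how such a tree could be represented in code, not material from the paper.

```python
# Hypothetical sketch: evaluating a function built as a composition of binary
# node functions on a rooted tree. Leaves hold input variables; each internal
# node applies its two-argument function to the values of its children.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Node:
    # For a leaf, `var` names an input; for an internal node, `op` combines
    # the values returned by `left` and `right`.
    var: Optional[str] = None
    op: Optional[Callable[[float, float], float]] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    def evaluate(self, inputs: dict) -> float:
        if self.var is not None:          # leaf: read the input variable
            return inputs[self.var]
        return self.op(self.left.evaluate(inputs),
                       self.right.evaluate(inputs))


# One possible composition on this tree shape: (x1 + x2) * x3.
tree = Node(op=lambda a, b: a * b,
            left=Node(op=lambda a, b: a + b,
                      left=Node(var="x1"), right=Node(var="x2")),
            right=Node(var="x3"))

print(tree.evaluate({"x1": 1.0, "x2": 2.0, "x3": 3.0}))  # prints 9.0
```

The question addressed by the paper is the converse one: given a fixed tree shape, which functions of the leaf variables can be realized by some choice of node functions.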
The process through which neurons are labeled is a key methodological choice in measuring neuron morphology. However, little is known about how this choice may bias measurements. To quantify this bias, we compare the extracted morphology of neurons collected from the same rodent species, experimental condition, gender distribution, age distribution, brain region, and putative cell type, but obtained with 19 distinct staining methods. We found strong biases in measured features of morphology. These were largest in features related to the coverage of the dendritic tree (e.g., the total dendritic tree length). Understanding measurement biases is crucial for interpreting morphological data.
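For concreteness, the sketch below computes one of the features mentioned above, total dendritic tree length, from a reconstruction stored in the standard SWC format. It is an illustrative example, not the paper's analysis pipeline, and the file path is a placeholder.

```python
# Hypothetical sketch: total dendritic length from an SWC reconstruction
# (columns: id, type, x, y, z, radius, parent). Types 3 and 4 conventionally
# mark basal and apical dendrites.
import numpy as np


def total_dendritic_length(swc_path: str) -> float:
    nodes = {}
    for line in open(swc_path):
        if line.startswith("#") or not line.strip():
            continue
        nid, ntype, x, y, z, radius, parent = line.split()[:7]
        nodes[int(nid)] = (int(ntype),
                           np.array([float(x), float(y), float(z)]),
                           int(parent))

    length = 0.0
    for ntype, xyz, parent in nodes.values():
        # Sum the length of every segment whose child node is a dendrite.
        if ntype in (3, 4) and parent in nodes:
            length += float(np.linalg.norm(xyz - nodes[parent][1]))
    return length


# Usage (path is a placeholder):
# print(total_dendritic_length("neuron.swc"))
```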
The intricate morphology of neurons has fascinated scientists since the dawn of neuroscience, and yet it remains hard to synthesize realistic morphologies. Current algorithms typically define a growth process with parameters that allow matching aspects of the morphologies. However, such algorithmic growth processes are far simpler than the biological ones. What is needed is an algorithm that, given a database of morphologies, produces more like them. Here, we introduce a generator for neuron morphologies that is based on a statistical sampling process. Our Reversible Jump Markov chain Monte Carlo (RJMCMC) method starts with a trivial neuron and iteratively perturbs the morphology, bringing its features closer to those of the database. By quantifying the statistics of the generated neurons, we find that it outperforms growth-based models for many features. Good generative models for neuron morphologies promise to be important both for neural simulations and for morphology reconstructions from imaging data.
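To convey the perturb-and-accept idea behind such a sampler, the sketch below runs a plain Metropolis loop over an abstract feature vector. It is a deliberately simplified assumption for illustration: the actual RJMCMC method proposes structural changes (e.g., adding or removing branches) to a real morphology tree, not moves in a two-dimensional feature space.

```python
# Highly simplified, hypothetical sketch of the perturb-and-accept idea:
# a plain Metropolis loop over a feature vector, not the reversible-jump
# sampler itself, which operates on actual tree structures.
import numpy as np

rng = np.random.default_rng(0)

target_features = np.array([2000.0, 35.0])   # e.g., total length (um), branch count
feature_scale = np.array([200.0, 5.0])       # tolerated spread around the target


def energy(features):
    # Lower energy = closer to the database statistics.
    return float(np.sum(((features - target_features) / feature_scale) ** 2))


features = np.array([100.0, 2.0])            # start from a "trivial" neuron
for step in range(20000):
    proposal = features + rng.normal(scale=[20.0, 1.0])
    # Metropolis acceptance rule on the energy difference.
    if np.log(rng.uniform()) < energy(features) - energy(proposal):
        features = proposal

print("final features:", features)
```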