The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes, both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model often yield specific physiological predictions. Furthermore, because the NIM is an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g., natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation.
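To make the NIM's cascade structure concrete, the sketch below implements the core computation described above: each subunit applies a linear stimulus filter followed by a rectifying upstream nonlinearity, the rectified outputs are combined with excitatory or suppressive signs, and a spiking nonlinearity maps the summed input to a firing rate. The rectified-linear subunit nonlinearity, the softplus spiking nonlinearity, and all filter values here are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def nim_rate(stim, filters, weights):
    """Minimal NIM sketch: each subunit applies a linear filter followed by
    a rectifying upstream nonlinearity; subunit outputs are weighted
    (+1 excitatory, -1 suppressive) and passed through a spiking
    nonlinearity to give a non-negative firing rate."""
    g = stim @ filters.T          # linear filter output, one column per subunit
    f = np.maximum(g, 0.0)        # rectified upstream nonlinearity
    G = f @ weights               # signed sum of subunit outputs
    return np.log1p(np.exp(G))    # softplus spiking nonlinearity (rate >= 0)

# Toy usage: two subunits (one excitatory, one suppressive) on a 20-dim stimulus.
rng = np.random.default_rng(0)
stim = rng.normal(size=(1000, 20))            # 1000 time bins
filters = rng.normal(size=(2, 20)) * 0.1      # illustrative filter values
rates = nim_rate(stim, filters, np.array([1.0, -1.0]))
```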
The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory was recently proposed as a theoretical framework for sequence learning in the cortex. In this paper, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show that the model is able to continuously learn a large number of variable-order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including a statistical method (autoregressive integrated moving average, ARIMA), a feedforward neural network (online sequential extreme learning machine, ELM), and recurrent neural networks (long short-term memory, LSTM, and echo-state networks, ESN), on sequence prediction problems with both artificial and real-world data. The HTM model achieves accuracy comparable to these state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem, but is also applicable to a wide range of real-world problems such as discrete and continuous sequence prediction, anomaly detection, and sequence classification.
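The claim about branching sequences, maintaining multiple predictions until disambiguating evidence arrives, can be illustrated without the HTM machinery itself. The toy predictor below is an assumption for illustration only, not the HTM algorithm: it stores observed variable-order contexts and returns every continuation consistent with the longest matching context, so a shared subsequence yields multiple simultaneous predictions while a longer context disambiguates.

```python
from collections import defaultdict

class ToySequencePredictor:
    """Toy variable-order predictor (NOT the HTM algorithm): maps context
    tuples of up to `order` preceding symbols to the set of symbols that
    followed them, so branching sequences keep multiple predictions alive."""
    def __init__(self, order=3):
        self.order = order
        self.table = defaultdict(set)   # context tuple -> set of next symbols

    def learn(self, seq):
        for i in range(1, len(seq)):
            for k in range(1, min(self.order, i) + 1):
                self.table[tuple(seq[i - k:i])].add(seq[i])

    def predict(self, context):
        # Use the longest stored context that matches the recent history.
        for k in range(min(self.order, len(context)), 0, -1):
            key = tuple(context[-k:])
            if key in self.table:
                return self.table[key]
        return set()

p = ToySequencePredictor(order=3)
p.learn("ABCD")
p.learn("XBCY")
print(p.predict("BC"))    # {'D', 'Y'}: both branches remain predicted
print(p.predict("ABC"))   # {'D'}: the longer context disambiguates
```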
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, each containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed.
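A minimal sketch of the core idea follows, assuming objects are represented as sets of (location, feature) pairs; this is an illustration of the recognition logic, not the published two-layer network. Each sensation narrows the set of candidate objects, and lateral "voting" across columns intersects their candidate sets, which is why multi-column networks converge with fewer sensor movements.

```python
# Toy object store: an object is a set of (location, feature) pairs.
# Object names, locations, and features here are hypothetical examples.
objects = {
    "mug":  {(0, "rim"), (1, "handle"), (2, "base")},
    "bowl": {(0, "rim"), (2, "base")},
}

def candidates(sensations):
    """Objects consistent with every (location, feature) pair sensed so far."""
    return {name for name, pairs in objects.items()
            if all(s in pairs for s in sensations)}

def multi_column(sensations_per_column):
    """Lateral 'voting': intersect the candidate sets of all columns."""
    result = set(objects)
    for sensations in sensations_per_column:
        result &= candidates(sensations)
    return result

print(candidates([(0, "rim")]))                      # {'mug', 'bowl'}: ambiguous
print(candidates([(0, "rim"), (1, "handle")]))       # {'mug'}: two movements
print(multi_column([[(0, "rim")], [(1, "handle")]])) # {'mug'}: one movement each
```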
Neuronal selectivity results from both excitatory and suppressive inputs to a given neuron. Suppressive influences can often significantly modulate neuronal responses and impart novel selectivity in the context of behaviorally relevant stimuli. In this work, we use a naturalistic optic flow stimulus to explore the responses of neurons in the middle temporal area (MT) of the alert macaque monkey; these responses are interpreted using a hierarchical model that incorporates relevant nonlinear properties of upstream processing in the primary visual cortex (V1). In this stimulus context, MT neuron responses can be predicted from distinct excitatory and suppressive components. Excitation is spatially localized and matches the measured preferred direction of each neuron. Suppression is typically composed of two distinct components: (1) a directionally untuned component, which appears to play the role of surround suppression and normalization; and (2) a direction-selective component, with a tuning width comparable to that of excitation and a distinct spatial footprint that usually partially overlaps with excitation. The direction preference of this direction-tuned suppression varies widely across MT neurons: approximately one-third have overlapping suppression tuned to the direction opposite that of excitation, and many others have suppression with direction preferences similar to excitation. There is also a population of MT neurons with orthogonally oriented suppression. We demonstrate that direction-selective suppression can impart selectivity of MT neurons to more complex velocity fields and that it can be used for improved estimation of the three-dimensional velocity of moving objects. Thus, considering MT neurons in a complex stimulus context reveals a diverse set of computations likely relevant for visual processing in natural contexts.
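The decomposition described above can be sketched as a simple direction-tuning computation: tuned excitation minus an untuned suppressive term and a direction-tuned suppressive term of comparable tuning width. The von Mises-like tuning curves, weights, and direction preferences below are illustrative assumptions, not the fitted hierarchical model.

```python
import numpy as np

def mt_rate(direction, pref_exc, pref_sup,
            w_exc=1.0, w_untuned=0.2, w_tuned=0.6, k=2.0):
    """Illustrative sketch: tuned excitation minus an untuned suppressive
    term and a direction-tuned suppressive term with comparable tuning
    width; the summed drive is rectified to give a firing rate."""
    def tune(pref):
        # von Mises-like tuning curve, peak 1 at the preferred direction
        return np.exp(k * (np.cos(direction - pref) - 1.0))
    drive = w_exc * tune(pref_exc) - w_untuned - w_tuned * tune(pref_sup)
    return np.maximum(drive, 0.0)

dirs = np.linspace(0.0, 2.0 * np.pi, 360)
# ~1/3 of neurons: suppression tuned opposite to excitation (opponent motion)
rate_opponent = mt_rate(dirs, pref_exc=0.0, pref_sup=np.pi)
# others: suppression with a direction preference similar to excitation
rate_similar = mt_rate(dirs, pref_exc=0.0, pref_sup=0.3)
```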
The responses of sensory neurons can differ substantially across repeated presentations of the same stimulus. Here, we demonstrate a direct link between the trial-to-trial variability of cortical neuron responses and network activity that is reflected in local field potentials (LFPs). Spikes and LFPs were recorded with a multielectrode array from the middle temporal (MT) area of the visual cortex of macaques during the presentation of continuous optic flow stimuli. A maximum likelihood-based modeling framework was used to predict single-neuron spiking responses from the stimulus, the LFPs, and the activity of other recorded neurons. MT neuron responses were strongly linked to gamma oscillations (strongest at 40 Hz) as well as to lower-frequency delta oscillations (1-4 Hz), with consistent phase preferences across neurons. The modulation predicted from the LFPs was largely complementary to that driven by visual stimulation and by the activity of other neurons, and accounted for nearly half of the trial-to-trial variability in the spiking responses. Moreover, the LFP model predictions accurately captured the temporal structure of noise correlations between pairs of simultaneously recorded neurons, and explained the variation in correlation magnitudes observed across the population. These results therefore identify signatures of network activity related to the variability of cortical neuron responses, and suggest their central role in sensory cortical function.
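Maximum likelihood frameworks of this kind are commonly built as Poisson generalized linear models; the sketch below shows a fit of that general form with toy covariates (a stimulus term plus the cosine and sine of a gamma-band LFP phase). The covariate design, exponential link, and simulated data are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, X, spikes):
    """Poisson negative log-likelihood (up to a constant) with an
    exponential link: expected spike count per bin = exp(X @ w + b)."""
    w, b = params[:-1], params[-1]
    rate = np.exp(X @ w + b)
    return np.sum(rate - spikes * np.log(rate))

# Toy covariates: a stimulus term plus gamma-band LFP phase (cos, sin),
# standing in for the paper's stimulus, LFP, and coupling predictors.
rng = np.random.default_rng(1)
T = 5000
stim = rng.normal(size=T)
phase = rng.uniform(0, 2 * np.pi, size=T)     # stand-in for ~40 Hz LFP phase
X = np.column_stack([stim, np.cos(phase), np.sin(phase)])
true_w, true_b = np.array([0.5, 0.8, 0.0]), -2.0
spikes = rng.poisson(np.exp(X @ true_w + true_b))  # simulated spike counts

fit = minimize(neg_log_likelihood, np.zeros(4), args=(X, spikes))
print(fit.x)   # recovered weights [w_stim, w_cos, w_sin] and bias b
```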