Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher-order interactions among large groups of elements have an important role. Here we show, in the vertebrate retina, that weak correlations between pairs of neurons coexist with strongly collective behaviour in the responses of ten or more neurons. We find that this collective behaviour is described quantitatively by models that capture the observed pairwise correlations but assume no higher-order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behaviour. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons.
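As an illustration of the pairwise maximum entropy (Ising) construction invoked in this abstract, the sketch below fits fields h_i and couplings J_ij to binary spike words by matching the measured firing rates and pairwise correlations. It is not the authors' code: the data here are a synthetic stand-in for binarized spike trains, the fit uses simple moment-matching updates, and exhaustive enumeration of the 2^N states restricts it to small groups (N = 5 in this toy example).

```python
# Minimal sketch: fit a pairwise maximum-entropy (Ising) model to binary "spike words"
# by matching means and pairwise correlations, enumerating all 2^N states (small N only).
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 5
# Stand-in for binarized spike trains (10,000 time bins, firing probability ~0.1 per bin);
# real data would simply replace this array.
data = (rng.random((10000, N)) < 0.1).astype(float)

mean_obs = data.mean(axis=0)               # firing rates <s_i>
corr_obs = (data.T @ data) / len(data)     # pairwise correlations <s_i s_j>

states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)  # all 2^N words

def model_stats(h, J):
    """Means and correlations under P(s) proportional to exp(h.s + 0.5 s.J.s)."""
    E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

h = np.zeros(N)
J = np.zeros((N, N))
lr = 0.5
for _ in range(2000):
    mean_m, corr_m = model_stats(h, J)
    h += lr * (mean_obs - mean_m)          # moment-matching update (log-likelihood direction)
    dJ = lr * (corr_obs - corr_m)
    np.fill_diagonal(dJ, 0.0)              # diagonal is already fixed by the fields h
    J += dJ

mean_m, corr_m = model_stats(h, J)
print("largest mismatch in means:", np.abs(mean_obs - mean_m).max())
print("largest mismatch in pairwise correlations:", np.abs(corr_obs - corr_m).max())
```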
Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such “K-pairwise” models, which systematically extend the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
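A minimal sketch of a "K-pairwise" energy function of the kind described in this abstract, assuming the commonly stated form in which pairwise Ising terms are supplemented by a potential V(K) on the total number of spiking neurons K; the parameter values below are hypothetical toy numbers, not parameters fitted to data.

```python
# Sketch of a K-pairwise energy: pairwise Ising terms plus a potential V(K) on the
# summed population activity K = sum_i s_i.
import numpy as np

def k_pairwise_energy(s, h, J, V):
    """E(s) = -(h.s + 0.5 s.J.s + V[K]); P(s) is proportional to exp(-E(s)).
    s: binary pattern (N,), h: fields (N,), J: symmetric couplings, zero diagonal (N,N),
    V: potential over the population spike count, indexed 0..N."""
    K = int(s.sum())
    return -(h @ s + 0.5 * s @ J @ s + V[K])

rng = np.random.default_rng(1)
N = 8
h = rng.normal(-2.0, 0.5, N)                # negative fields: sparse firing
J = rng.normal(0.0, 0.1, (N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)
V = -0.05 * np.arange(N + 1) ** 2           # toy potential shaping the synchrony distribution
s = (rng.random(N) < 0.2).astype(float)
print("energy of one pattern:", k_pairwise_energy(s, h, J, V))
# For populations of 100+ neurons the normalization cannot be enumerated;
# fitting and sampling are typically done by Monte Carlo.
```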
A key issue in understanding the neural code for an ensemble of neurons is the nature and strength of correlations between neurons and how these correlations are related to the stimulus. The issue is complicated by the fact that there is not a single notion of independence or lack of correlation. We distinguish three kinds: (1) activity independence; (2) conditional independence; and (3) information independence. Each notion is related to an information measure: the information between cells, the information between cells given the stimulus, and the synergy of cells about the stimulus, respectively. We show that these measures form an interrelated framework for evaluating contributions of signal and noise correlations to the joint information conveyed about the stimulus and that at least two of the three measures must be calculated to characterize a population code. This framework is compared with others recently proposed in the literature. In addition, we distinguish questions about how information is encoded by a population of neurons from how that information can be decoded. Although information theory is natural and powerful for questions of encoding, it is not sufficient for characterizing the process of decoding. Decoding fundamentally requires an error measure that quantifies the importance of the deviations of estimated stimuli from actual stimuli. Because there is no a priori choice of error measure, questions about decoding cannot be put on the same level of generality as for encoding.
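The three measures named above can be made concrete with a toy calculation. The sketch below, using a hypothetical joint table for two binary cells and a binary stimulus, computes (1) the mutual information between the cells' activities, (2) the conditional mutual information between the cells given the stimulus, and (3) the synergy, i.e. the joint information about the stimulus minus the sum of the single-cell informations.

```python
# Toy numerical illustration (hypothetical numbers, not the paper's data): the three
# measures discussed above for two binary cells r1, r2 and a binary stimulus s,
# computed from a full joint table P[s, r1, r2].
import numpy as np

def H(p):
    """Entropy in bits; zero-probability entries contribute nothing."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

P = np.array([[[0.30, 0.05],
               [0.05, 0.10]],
              [[0.10, 0.05],
               [0.05, 0.30]]])
P /= P.sum()

P_s  = P.sum(axis=(1, 2))     # P(s)
P_r  = P.sum(axis=0)          # joint response distribution P(r1, r2)
P_r1 = P.sum(axis=(0, 2))     # P(r1)
P_r2 = P.sum(axis=(0, 1))     # P(r2)

# (1) activity correlation, ignoring the stimulus: I(r1; r2)
I_activity = H(P_r1) + H(P_r2) - H(P_r)

# (2) noise correlation: I(r1; r2 | s), averaged over stimuli
I_conditional = sum(P_s[s] * (H(P[s].sum(axis=1) / P_s[s]) +
                              H(P[s].sum(axis=0) / P_s[s]) -
                              H(P[s] / P_s[s]))
                    for s in range(len(P_s)))

# (3) synergy: joint information about the stimulus minus the single-cell informations
I_joint = H(P_s) + H(P_r) - H(P)
I_r1_s  = H(P_s) + H(P_r1) - H(P.sum(axis=2))
I_r2_s  = H(P_s) + H(P_r2) - H(P.sum(axis=1))
synergy = I_joint - (I_r1_s + I_r2_s)

print(f"I(r1;r2) = {I_activity:.3f} bits, I(r1;r2|s) = {I_conditional:.3f} bits, "
      f"synergy = {synergy:.3f} bits")
```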
Entropy and information provide natural measures of correlation among elements in a network. We construct here the information-theoretic analog of connected correlation functions: irreducible N-point correlation is measured by a decrease in entropy for the joint distribution of N variables relative to the maximum entropy allowed by all the observed (N − 1)-variable distributions. We calculate the "connected information" terms for several examples, and show that this construction also enables the decomposition of the information that is carried by a population of elements about an outside source.

Keywords: entropy, information, multi-information, redundancy, synergy, correlation, network

In statistical physics and field theory, the nature of order in a system is characterized by correlation functions. These ideas are especially powerful because there is a direct relation between the correlation functions and experimental observables such as scattering cross sections and susceptibilities. As we move toward the analysis of more complex systems, such as the interactions among genes or neurons in a network, it is not obvious how to construct correlation functions which capture the underlying order. On the other hand, it is possible to observe directly the activity of many single neurons in a network or the expression levels of many genes, and hence real experiments in these systems are more like Monte Carlo simulations, sampling the distribution of network states. Shannon proved that, given a probability distribution over a set of variables, entropy is the unique measure of what can be learned by observing these variables, given certain simple and plausible criteria (continuity, monotonicity and additivity) [1]. By the same arguments, mutual information arises as the unique measure of the interdependence of two variables, or two sets of variables. Defining information-theoretic analogs of higher-order correlations has proved to be more difficult [2,3,4,5,6,7,8,9,10]. When we compute N-point correlation functions in statistical physics and field theory, we are careful to isolate the connected correlations, which are the components of the N-point correlation that cannot be factored into correlations among groups of fewer than N observables. We propose here an analogous measure of "connected information" which generalizes precisely our intuition about connectedness and interactions from field theory; a closely related discussion for quantum information has been given recently [11]. Consider N variables {x_i}, i = 1, 2, ..., N, drawn from the joint probability distribution P({x_i}); this has an entropy [12]

S(\{x_i\}) = -\sum_{\{x_i\}} P(\{x_i\}) \log_2 P(\{x_i\}).

The fact that the N variables are correlated means that the entropy S({x_i}) is smaller than the sum of the entropies for each variable individually,

S(\{x_i\}) \le \sum_{i=1}^{N} S(x_i).

The total difference in entropy between the interacting variables and the variables taken independently can be written as [2,3]

I(\{x_i\}) = \sum_{i=1}^{N} S(x_i) - S(\{x_i\}),

the multi-information.
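A small numerical illustration of these quantities (toy distribution, not an example from the paper): for three binary variables with an XOR-like joint distribution, all pairwise correlations vanish, so the pairwise maximum entropy distribution is uniform and the entire multi-information shows up as third-order connected information. Iterative proportional fitting is used here as one standard way to build the pairwise maximum entropy distribution; it is an illustrative choice, not necessarily the authors' procedure.

```python
# Multi-information I = sum_i S(x_i) - S({x_i}) and third-order connected information
# (entropy of the pairwise maximum-entropy distribution minus the true joint entropy)
# for three binary variables with a toy XOR-like joint distribution.
import itertools
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

states = np.array(list(itertools.product([0, 1], repeat=3)))

# Toy joint distribution with a genuine triplet interaction: states satisfying x1 XOR x2 = x3
# are favored; all single and pairwise marginals come out uniform.
P = np.full(8, 0.02)
for k, (a, b, c) in enumerate(states):
    if (a ^ b) == c:
        P[k] = 0.23
P /= P.sum()

S_joint = H(P)
S_singles = sum(H(np.array([P[states[:, i] == 0].sum(), P[states[:, i] == 1].sum()]))
                for i in range(3))
multi_info = S_singles - S_joint

def pair_marginal(p, i, j):
    m = np.zeros((2, 2))
    for k, s in enumerate(states):
        m[s[i], s[j]] += p[k]
    return m

# Pairwise maximum-entropy approximation via iterative proportional fitting (IPF).
Q = np.full(8, 1 / 8)
for _ in range(200):
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        target, current = pair_marginal(P, i, j), pair_marginal(Q, i, j)
        for k, s in enumerate(states):
            Q[k] *= target[s[i], s[j]] / current[s[i], s[j]]
        Q /= Q.sum()

connected_info_3 = H(Q) - S_joint
# For this XOR-like example the two numbers coincide: there is no pairwise contribution.
print("multi-information:", round(multi_info, 3), "bits;",
      "3rd-order connected information:", round(connected_info_3, 3), "bits")
```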
The firing reliability and precision of an isopotential membrane patch consisting of a realistically large number of ion channels is investigated using a stochastic Hodgkin-Huxley (HH) model. In sharp contrast to the deterministic HH model, the biophysically inspired stochastic model reproduces qualitatively the different reliability and precision characteristics of spike firing in response to DC and fluctuating current input in neocortical neurons, as reported by Mainen & Sejnowski (1995). For DC inputs, spike timing is highly unreliable; the reliability and precision are significantly increased for fluctuating current input. This behavior is critically determined by the relatively small number of excitable channels that are opened near threshold for spike firing rather than by the total number of channels that exist in the membrane patch. Channel fluctuations, together with the inherent bistability in the HH equations, give rise to three additional experimentally observed phenomena: subthreshold oscillations in the membrane voltage for DC input, "spontaneous" spikes for subthreshold inputs, and "missing" spikes for suprathreshold inputs. We suggest that the noise inherent in the operation of ion channels enables neurons to act as "smart" encoders. Slowly varying, uncorrelated inputs are coded with low reliability and accuracy and, hence, the information about such inputs is encoded almost exclusively by the spike rate. On the other hand, correlated presynaptic activity produces sharp fluctuations in the input to the postsynaptic cell, which are then encoded with high reliability and accuracy. In this case, information about the input exists in the exact timing of the spikes. We conclude that channel stochasticity should be considered in realistic models of neurons.
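The stochastic Hodgkin-Huxley patch described above can be sketched numerically. The code below is an assumption-laden illustration, not the paper's implementation: it uses a Fox-Lu style Langevin (subunit-noise) approximation instead of exact Markov channel kinetics, standard squid-axon HH parameters, and guessed channel densities, patch area, and input amplitudes (prefactor conventions for the noise term vary across formulations). The one point it is meant to convey is that the gating noise scales inversely with the number of channels, so a small patch responds noisily to DC input while tracking fluctuating input more reliably.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic HH parameters (standard squid-axon values, resting potential near -65 mV).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3            # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                  # mV

# Assumed patch size and channel densities; these set the size of the channel noise.
area_um2 = 100.0
N_Na = int(60 * area_um2)                         # ~60 Na channels per um^2 (assumption)
N_K  = int(18 * area_um2)                         # ~18 K channels per um^2 (assumption)

def an(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def bn(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)
def am(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def bm(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def ah(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def bh(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def gate_step(x, a, b, N, dt):
    """One Euler step of dx = (a(1-x) - b x) dt + sqrt((a(1-x) + b x)/N) dW
    (Fox-Lu style subunit noise); fewer channels means larger gating fluctuations."""
    drift = a * (1.0 - x) - b * x
    sigma = np.sqrt(max(a * (1.0 - x) + b * x, 0.0) / N)
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return min(max(x, 0.0), 1.0)

def simulate(I_input, dt=0.01):
    """Integrate the stochastic patch for an input current trace (uA/cm^2); returns V(t)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    Vs = np.empty(len(I_input))
    for t, I in enumerate(I_input):
        m = gate_step(m, am(V), bm(V), N_Na, dt)
        h = gate_step(h, ah(V), bh(V), N_Na, dt)
        n = gate_step(n, an(V), bn(V), N_K, dt)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V = V + dt * (I - I_ion) / C
        Vs[t] = V
    return Vs

def count_spikes(V):
    return int(((V[1:] > 0.0) & (V[:-1] <= 0.0)).sum())   # upward crossings of 0 mV

# DC versus "frozen noise" input with the same mean; repeating simulate() on the same
# frozen input with fresh channel noise would probe the spike-time reliability contrast
# described in the abstract.
T = 20000                                                 # 200 ms at dt = 0.01 ms
I_dc   = np.full(T, 8.0)
I_fluc = 8.0 + 5.0 * np.repeat(rng.normal(size=T // 100), 100)
print("spikes, DC input:", count_spikes(simulate(I_dc)))
print("spikes, fluctuating input:", count_spikes(simulate(I_fluc)))
```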