Multineuron firing patterns are often observed, yet are predicted to be rare by models that assume independent firing. To explain these correlated network states, two groups recently applied a second-order maximum entropy model that used only observed firing rates and pairwise interactions as parameters (Schneidman et al., 2006; Shlens et al., 2006). Interestingly, with these minimal assumptions they predicted 90–99% of network correlations. If generally applicable, this approach could vastly simplify analyses of complex networks. However, this initial work was done largely on retinal tissue, and its applicability to cortical circuits is mostly unknown. This work also did not address the temporal evolution of correlated states. To investigate these issues, we applied the model to multielectrode data containing spontaneous spikes or local field potentials from cortical slices and cultures. The model worked slightly less well in cortex than in retina, accounting for 88 ± 7% (mean ± SD) of network correlations. In addition, in 8 of 13 preparations, the observed sequences of correlated states were significantly longer than predicted by concatenating states from the model. This suggested that temporal dependencies are a common feature of cortical network activity and should be considered in future models. We found a significant relationship between strong pairwise temporal correlations and observed sequence length, suggesting that pairwise temporal correlations may allow the model to be extended into the temporal domain. We conclude that although a second-order maximum entropy model successfully predicts correlated states in cortical networks, it should be extended to account for temporal correlations observed between states.
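The second-order maximum entropy (Ising-type) model described above can be sketched for a small ensemble. The following is a minimal illustration, not the published fitting procedure: it fits fields and pairwise couplings to synthetic binary spike "words" by gradient ascent over all 2^N states. All data, learning rates, and iteration counts are assumptions chosen for clarity.

```python
# Minimal sketch: fit a second-order maximum entropy model to binary spike
# words from a small ensemble. Synthetic data; illustrative parameters.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 3                                          # neurons per binary "word"
data = (rng.random((5000, N)) < 0.2).astype(float)
# inject a pairwise correlation between neurons 0 and 1
data[:, 1] = np.where(rng.random(5000) < 0.5, data[:, 0], data[:, 1])

states = np.array(list(itertools.product([0, 1], repeat=N)), float)

def model_probs(h, J):
    """P(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j)."""
    E = states @ h + np.einsum('ki,ij,kj->k', states, np.triu(J, 1), states)
    p = np.exp(E)
    return p / p.sum()

# moments to match: firing rates and pairwise coincidence rates
mean_data = data.mean(0)
corr_data = (data.T @ data) / len(data)

h, J = np.zeros(N), np.zeros((N, N))
for _ in range(2000):                          # gradient ascent on likelihood
    p = model_probs(h, J)
    mean_model = p @ states
    corr_model = states.T @ (p[:, None] * states)
    h += 0.5 * (mean_data - mean_model)
    J += 0.5 * np.triu(corr_data - corr_model, 1)

p = model_probs(h, J)
print(np.round(p @ states, 3), np.round(mean_data, 3))  # rates should match
```

Because the moments of an exponential-family model uniquely determine it, matching rates and pairwise coincidences is enough to pin down the distribution; higher-order structure in the data is then either captured or not, which is exactly what the percentage figures above quantify.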
Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for the spatial correlation structure of neuronal ensembles recorded from several different types of preparations. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups have now worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research.
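The standard way to score how much correlation structure a pairwise model captures is the entropy-based fraction f = (S1 − S2) / (S1 − SN), where S1 is the entropy of an independent model, S2 that of the pairwise maximum entropy model, and SN that of the observed pattern distribution. A small hedged illustration of the formula follows; the "pairwise model" here is a hand-made stand-in distribution purely to show the arithmetic, not a fitted model.

```python
# Illustration of the fraction-of-correlation-captured metric:
# f = (S1 - S2) / (S1 - SN). All distributions below are toy assumptions.
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# toy distribution over 2-neuron binary words (00, 01, 10, 11)
p_true = np.array([0.5, 0.1, 0.1, 0.3])            # correlated ensemble
rates = np.array([p_true[2] + p_true[3],            # P(neuron 0 fires)
                  p_true[1] + p_true[3]])           # P(neuron 1 fires)
p_ind = np.array([(1 - rates[0]) * (1 - rates[1]),  # independent model
                  (1 - rates[0]) * rates[1],
                  rates[0] * (1 - rates[1]),
                  rates[0] * rates[1]])
S1, SN = entropy(p_ind), entropy(p_true)
p_pair = 0.9 * p_true + 0.1 * p_ind                 # stand-in "pairwise" model
S2 = entropy(p_pair)
fraction = (S1 - S2) / (S1 - SN)
print(round(fraction, 3))
```

The denominator S1 − SN is the multi-information (total correlation) in the data, so f = 1 means the pairwise model accounts for all of it and f = 0 means none.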
As suggested by recent experimental evidence, a spontaneously active neural system that is capable of continual learning should also be capable of homeostasis of both activity and connectivity. The connectivity appears to be maintained at a level that is optimal for information transmission and storage. We present a simple stochastic computational Hebbian learning model that incorporates homeostasis of both activity and connectivity, and we explore its stability and connectivity properties. We find that homeostasis of activity and connectivity imposes structural and dynamic constraints on the behavior of the system. For instance, the connectivity pattern is sparse and activation patterns are scale-free. Additionally, homeostasis of connectivity must occur on a timescale faster than homeostasis of activity. We demonstrate the clinical relevance of these constraints by simulating a prolonged seizure and acute deafferentation. Based on our simulations, we predict that in both the post-seizure and post-deafferentation states, the system is over-connected and, hence, epileptogenic. We further predict that interventions that boost spontaneous activity should be protective against epileptogenesis, while interventions that boost stimulated or connectivity-related activity are pro-epileptogenic.
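A minimal sketch of the kind of model described above may help fix ideas. This is not the published model: the update rules, target rate, target degree, and timescales below are illustrative assumptions. It does, however, respect the constraint stated in the abstract that homeostasis of connectivity (here, per-step pruning and growth) runs faster than homeostasis of activity (here, a slow threshold adjustment).

```python
# Illustrative sketch: a stochastic binary network with homeostasis of both
# activity and connectivity. Parameters are assumptions, not published values.
import numpy as np

rng = np.random.default_rng(1)
N, target_rate, target_deg = 100, 0.1, 10.0
W = (rng.random((N, N)) < 0.1).astype(float)       # binary connectivity
np.fill_diagonal(W, 0)
thresh = np.full(N, 1.0)                            # per-unit firing threshold

for step in range(2000):
    x = (rng.random(N) < 0.05).astype(float)        # spontaneous drive
    x = np.maximum(x, (W @ x > thresh).astype(float))  # one propagation step
    # homeostasis of activity (slow): nudge thresholds toward the target rate
    thresh += 0.01 * (x - target_rate)
    # homeostasis of connectivity (fast): prune/grow toward a target in-degree
    deg = W.sum(1)
    for i in np.where(deg > target_deg)[0]:
        on = np.flatnonzero(W[i])
        W[i, rng.choice(on)] = 0.0                  # remove a random connection
    for i in np.where(deg < target_deg)[0]:
        j = rng.integers(N)
        if j != i:
            W[i, j] = 1.0                           # add a random connection

print(round(W.sum(1).mean(), 1))                    # mean in-degree near target
```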
Background: How living neural networks retain information is still incompletely understood. Two prominent ideas on this topic have developed in parallel, but have remained somewhat unconnected. The first of these, the "synaptic hypothesis," holds that information can be retained in synaptic connection strengths, or weights, between neurons. Recent work inspired by statistical mechanics has suggested that networks will retain the most information when their weights are distributed in a skewed manner, with many weak weights and only a few strong ones. The second of these ideas is that information can be represented by stable activity patterns. Multineuron recordings have shown that sequences of neural activity distributed over many neurons are repeated above chance levels when animals perform well-learned tasks. Although these two ideas are compelling, no one to our knowledge has yet linked the predicted optimum distribution of weights to stable activity patterns actually observed in living neural networks. Results: Here, we explore this link by comparing stable activity patterns from cortical slice networks recorded with multielectrode arrays to stable patterns produced by a model with a tunable weight distribution. This model was previously shown to capture central features of the dynamics in these slice networks, including neuronal avalanche cascades. We find that when the model weight distribution is appropriately skewed, it correctly matches the distribution of repeating patterns observed in the data. In addition, this same distribution of weights maximizes the capacity of the network model to retain stable activity patterns. Thus, the distribution that best fits the data is also the distribution that maximizes the number of stable patterns. Conclusions: We conclude that local cortical networks are very likely to use a highly skewed weight distribution to optimize information retention, as predicted by theory. Fixed distributions impose constraints on learning, however.
The network must have mechanisms for preserving the overall weight distribution while allowing individual connection strengths to change with learning.
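The "many weak, few strong" weight distribution discussed above is often modeled as lognormal, a common choice for cortical synaptic weights. A short sketch with illustrative parameters (not fitted to any data) shows the characteristic skew: the median falls well below the mean, and a small fraction of synapses carries a large share of the total weight.

```python
# Sketch of a skewed ("many weak, few strong") synaptic weight distribution,
# drawn here from a lognormal; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
w = rng.lognormal(mean=-1.0, sigma=1.0, size=10_000)   # synaptic weights
w_sorted = np.sort(w)[::-1]
top10_share = w_sorted[:1000].sum() / w.sum()          # weight in top decile
print(f"median/mean = {np.median(w) / w.mean():.2f}, "
      f"top 10% of synapses carry {100 * top10_share:.0f}% of total weight")
```

The constraint noted above then becomes concrete: learning rules may shuffle which individual synapses are strong, but should leave summary statistics like this top-decile share approximately invariant.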
The average cortical neuron makes and receives about 1,000–10,000 synaptic contacts. This anatomical information suggests that local cortical networks are connected in a fairly democratic manner, with all neurons having about the same number of incoming and outgoing connections. But the physical connections found in the cortex do not necessarily reveal how information flows through cortical networks. What is the network diagram for information flow in cortical networks? To investigate this issue, we recorded spontaneous spiking activity at 20 kHz for over 1 hr from organotypic cortex cultures [1] placed on a high-density 512-electrode array with 60 μm interelectrode spacing. The high-density array increased the chances that we would record from synaptically connected neurons and allowed us to obtain stable long-term recordings that were essential for accurate estimates of entropy rates [2] and information flow. To measure information flow, we used a new method called transfer entropy [3] that has been shown to accurately identify connections in model networks. Our initial "democratic" hypothesis was that network diagrams of information flow would show all neurons to have approximately equal amounts of incoming and outgoing information flow. Surprisingly, our analysis revealed wide differences in the amount of information flowing into and out of different neurons in the network, indicating that information flow is not "democratically" distributed [4]. These data point to the existence of cells with high information flow that act as highly central hub nodes in the network. Future work combining experiments and simulations will be directed at exploring why local cortical networks assume such nondemocratic information flow patterns.
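Transfer entropy, the directed measure named above, quantifies how much knowing the source's recent past reduces uncertainty about the target's next state beyond what the target's own past provides. A minimal sketch (not the authors' pipeline) with one bin of history and synthetic, directionally coupled spike trains:

```python
# Minimal transfer entropy sketch for binary spike trains, one bin of history:
# TE(S->T) = sum p(t', t, s) * log2[ p(t' | t, s) / p(t' | t) ]
# Synthetic data with an assumed one-bin-delayed coupling from src to tgt.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
src = (rng.random(n) < 0.2).astype(int)
# target fires more often one bin after the source fires: directed coupling
tgt = ((rng.random(n) < 0.05) |
       ((np.roll(src, 1) == 1) & (rng.random(n) < 0.4))).astype(int)

def transfer_entropy(source, target):
    """TE in bits from source to target, single-bin histories."""
    s, t, t_next = source[:-1], target[:-1], target[1:]
    te = 0.0
    for a in (0, 1):                # next target state
        for b in (0, 1):            # target history
            for c in (0, 1):        # source history
                p_abc = np.mean((t_next == a) & (t == b) & (s == c))
                p_bc = np.mean((t == b) & (s == c))
                p_ab = np.mean((t_next == a) & (t == b))
                p_b = np.mean(t == b)
                if p_abc > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

print(transfer_entropy(src, tgt) > transfer_entropy(tgt, src))  # True
```

The asymmetry in the final comparison is the point: unlike correlation, transfer entropy distinguishes the direction of influence, which is what makes per-neuron "incoming" versus "outgoing" information flow a meaningful distinction.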