The brain exhibits complex spatio-temporal patterns of activity. This phenomenon is governed by an interplay between the internal neural dynamics of cortical areas and their connectivity. Uncovering this complex relationship has raised much interest, both for theory and for the interpretation of experimental data (e.g., fMRI recordings) using dynamical models. Here we focus on the so-called inverse problem: the inference of network parameters in a cortical model so as to reproduce empirically observed activity. Despite considerable effort, recovering directed connectivity for large networks has remained largely unsuccessful so far. The present study specifically addresses this point for a noise-diffusion network model. We develop a Lyapunov optimization that iteratively tunes the network connectivity in order to reproduce second-order moments of the node activity, or functional connectivity. We show theoretically and numerically that the use of covariances with both zero and non-zero time shifts is the key to inferring directed connectivity. The first main theoretical finding is that an accurate estimation of the underlying network connectivity requires that the time shift of the covariances be matched to the time constant of the dynamical system. In addition to the network connectivity, we also adjust the intrinsic noise received by each network node. The framework is applied to experimental fMRI data recorded for subjects at rest. Diffusion-weighted MRI data provide an estimate of anatomical connections, which is incorporated to constrain the cortical model. The empirical covariance structure is reproduced faithfully, especially its temporal component (i.e., time-shifted covariances) in addition to the spatial component that is usually the focus of studies. We find that the cortical interactions, referred to as effective connectivity, in the tuned model are not reciprocal. In particular, hubs are either receptors or feeders: they do not exhibit both strong incoming and outgoing connections. Our results set a quantitative foundation for exploring the propagation of activity in the cortex.
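To make the procedure concrete, the following is a minimal Python sketch of the forward model and an iterative tuning loop, assuming a multivariate Ornstein-Uhlenbeck (noise-diffusion) process. The Jacobian update is a first-order heuristic derived from the relation Qtau = Q0 expm(J^T tau); it is not necessarily the exact update rule of the study, and all parameter values, step sizes, and the synthetic "empirical" covariances are illustrative assumptions.

```python
# Sketch of a Lyapunov optimization for a noise-diffusion (OU) network.
# Hypothetical parameters; the update rule is a first-order heuristic, not
# necessarily the exact rule used in the paper.
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

rng = np.random.default_rng(0)
N, tau_x, tau = 10, 1.0, 1.0   # nodes; leakage time constant; covariance time shift

def model_covariances(C, Sigma):
    """Covariances of the model dx = (-x/tau_x + C x) dt + noise."""
    J = -np.eye(N) / tau_x + C                 # Jacobian of the linear dynamics
    Q0 = solve_continuous_lyapunov(J, -Sigma)  # solves J Q0 + Q0 J^T + Sigma = 0
    Qtau = Q0 @ expm(J.T * tau)                # covariance at time shift tau
    return J, Q0, Qtau

# Synthetic "empirical" covariances generated from a hidden ground-truth network.
C_true = rng.uniform(0.0, 0.5, (N, N)) / N
np.fill_diagonal(C_true, 0.0)
mask = C_true > 0                              # stand-in for the anatomical (DTI) mask
_, Q0_emp, Qtau_emp = model_covariances(C_true, np.eye(N))

C, Sigma = np.zeros((N, N)), np.eye(N)         # start from an empty network
eps_C, eps_S = 0.01, 0.01
for _ in range(5000):
    J, Q0, Qtau = model_covariances(C, Sigma)
    dQ0, dQtau = Q0_emp - Q0, Qtau_emp - Qtau
    # First-order correction from Qtau = Q0 expm(J^T tau): fit the lagged
    # (directional) structure; the diagonal mismatch is absorbed by Sigma.
    dJ = (np.linalg.inv(Q0) @ (dQtau @ expm(-J.T * tau) - dQ0)).T / tau
    C = np.clip(C + eps_C * dJ * mask, 0.0, None)  # allowed, nonnegative weights only
    Sigma = np.maximum(Sigma + eps_S * np.diag(np.diag(dQ0)),
                       1e-3 * np.eye(N))           # node-wise intrinsic noise
print("residual mismatch:", np.linalg.norm(dQ0), np.linalg.norm(dQtau))
```

Note the role of the time shift: the lagged covariance Qtau breaks the symmetry of Q0, which is why fitting both matrices constrains the directionality of C.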
Spike-timing-dependent plasticity (STDP) determines the evolution of the synaptic weights according to their pre- and post-synaptic activity, which in turn changes the neuronal activity. In this paper, we extend previous studies of input selectivity induced by STDP for single neurons to the biologically interesting case of a neuronal network with fixed recurrent connections and plastic connections from external pools of input neurons. We use a theoretical framework based on the Poisson neuron model to analytically describe the network dynamics (firing rates and spike-time correlations) and thus the evolution of the synaptic weights. This framework incorporates the time course of the post-synaptic potentials and synaptic delays. Our analysis focuses on the asymptotic states of a network stimulated by two homogeneous pools of "steady" inputs, namely Poisson spike trains with fixed firing rates and spike-time correlations. The STDP model extends rate-based learning in that it can implement, at the same time, both a stabilization of the individual neuron firing rates and a slower weight specialization depending on the input spike-time correlations. When one input pathway has stronger within-pool correlations, the resulting synaptic dynamics induced by STDP are shown to be similar to those arising in the case of a purely feed-forward network: the weights from the more correlated inputs are potentiated at the expense of the remaining input connections.
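The two time scales can be illustrated with a toy reduction of the averaged learning dynamics: a fast homeostatic rate term and a slow correlation-driven term. The sketch below omits the PSP kernels, delays, and recurrent correlations of the full spike-based framework, and all names and parameter values (learning rate, gains, target rate, bounds) are illustrative assumptions, not the paper's calibration.

```python
# Toy reduction of the averaged STDP learning equation for one linear Poisson
# neuron driven by two input pools; hypothetical parameters throughout.
import numpy as np

n, nu_in = 50, 10.0                        # inputs per pool; input firing rate (Hz)
c = np.r_[np.full(n, 0.20),                # pool 1: stronger within-pool correlation
          np.full(n, 0.05)]                # pool 2: weaker within-pool correlation
w = np.full(2 * n, 0.05)                   # plastic feed-forward weights
eta, a_rate, a_corr = 1e-3, 1.0, 0.5       # learning rate; rate and correlation gains
nu_target, w_max = 20.0, 0.2               # homeostatic target rate; weight bound

for _ in range(50000):
    nu_post = nu_in * w.sum()              # output rate of the linear Poisson neuron
    # Fast homeostatic term stabilizes nu_post; slow correlation term drives
    # specialization toward the more correlated pool.
    dw = eta * (-a_rate * (nu_post - nu_target) / len(w) + a_corr * c)
    w = np.clip(w + dw, 0.0, w_max)

print("mean weight, correlated pool:  ", w[:n].mean())
print("mean weight, uncorrelated pool:", w[n:].mean())
print("output rate (Hz):", nu_in * w.sum())
```

At equilibrium the rate term cannot balance the correlation term for both pools at once, so the weights from the less correlated pool are depressed toward zero while the more correlated pool is potentiated, with the output rate held near its homeostatic set point.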
The dynamics of the learning equation, which describes the evolution of the synaptic weights, are derived for the situation in which the network contains recurrent connections. The derivation is carried out for the Poisson neuron model. The spiking rates of the recurrently connected neurons and their cross-correlations are determined self-consistently as a function of the external synaptic inputs. The solution of the learning equation is illustrated by the analysis of the particular case in which there is no external synaptic input. The general learning equation and the fixed-point structure of its solutions are discussed.
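For linearly interacting Poisson (Hawkes-type) units, the self-consistency of rates and correlations has a closed form, which the sketch below illustrates. The weight matrix W and input rates are illustrative random values, assumed stable (spectral radius of W below 1); the sketch covers the rate/correlation self-consistency only, not the learning equation itself.

```python
# Self-consistent rates and integrated spike-train covariances for a network of
# linearly interacting Poisson (Hawkes-type) neurons; illustrative values only.
import numpy as np

rng = np.random.default_rng(2)
N = 8
W = rng.uniform(0.0, 0.2, (N, N))        # recurrent weights (spectral radius < 1)
np.fill_diagonal(W, 0.0)
nu_ext = rng.uniform(5.0, 10.0, N)       # external input rates (Hz)

# Stationary rates solve nu = nu_ext + W nu  =>  nu = (I - W)^{-1} nu_ext.
I = np.eye(N)
nu = np.linalg.solve(I - W, nu_ext)

# Integrated (long-window) spike-count covariances: each neuron's Poisson shot
# noise is filtered through the recurrent propagator P = (I - W)^{-1}.
P = np.linalg.inv(I - W)
C = P @ np.diag(nu) @ P.T

print("self-consistent rates (Hz):", np.round(nu, 2))
print("example cross-covariance C[0, 1]:", round(C[0, 1], 3))
```

The same propagator (I - W)^{-1} appears in both expressions, which is what couples the firing rates and cross-correlations that drive the learning equation.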