2017
DOI: 10.1103/physreve.95.012310

Approximate-master-equation approach for the Kinouchi-Copelli neural model on networks

Abstract: In this work, we use the approximate-master-equation approach to study the dynamics of the Kinouchi-Copelli neural model on various networks. By categorizing each neuron in terms of its state and also the states of its neighbors, we are able to uncover how the coupled system evolves with respect to time by directly solving a set of ordinary differential equations. In particular, we can easily calculate the statistical properties of the time evolution of the network instantaneous response, the network respon…
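
The abstract sketches the dynamics that the approximate master equations coarse-grain: each neuron is a cellular automaton with one quiescent, one excited, and several refractory states, and excitations spread probabilistically along edges. As a point of reference, here is a minimal Monte Carlo sketch of that Kinouchi-Copelli automaton (not the paper's master-equation method); the network topology, the coupling matrix A, and every parameter value below are illustrative assumptions.

```python
import numpy as np

def kinouchi_copelli_step(states, A, n_states, p_ext, rng):
    """One synchronous update of a Kinouchi-Copelli excitable automaton.

    states : int array in {0, ..., n_states-1}; 0 = quiescent, 1 = excited,
             2 .. n_states-1 = refractory.
    A      : (N, N) array; A[i, j] = probability that an excited neighbour j
             excites a quiescent node i in one step.
    p_ext  : per-step probability of an external excitation (assumption).
    """
    excited = states == 1
    new_states = states.copy()

    # Excited and refractory nodes advance deterministically, then return to 0.
    new_states[states >= 1] = (states[states >= 1] + 1) % n_states

    # A quiescent node fires if the external drive or any excited neighbour hits it.
    p_not_local = np.prod(np.where(excited[None, :], 1.0 - A, 1.0), axis=1)
    p_fire = 1.0 - (1.0 - p_ext) * p_not_local
    fires = (states == 0) & (rng.random(states.size) < p_fire)
    new_states[fires] = 1
    return new_states

# Illustrative run on a random graph; all parameters here are assumptions.
rng = np.random.default_rng(0)
N, n_states, K = 1000, 5, 10
sigma = 1.0                                   # target branching ratio
adj = rng.random((N, N)) < K / N              # Erdos-Renyi-like topology
np.fill_diagonal(adj, False)
A = np.where(adj, rng.uniform(0.0, 2.0 * sigma / K, (N, N)), 0.0)

states = np.zeros(N, dtype=int)
states[rng.choice(N, 10, replace=False)] = 1  # small initial stimulus
activity = []
for _ in range(200):
    states = kinouchi_copelli_step(states, A, n_states, p_ext=0.0, rng=rng)
    activity.append((states == 1).mean())     # network instantaneous response
```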

Cited by 12 publications (10 citation statements)
References 46 publications
“…Concerning the simulations, we must make a technical observation. In contrast to the static model [24], the relevant indicator of criticality here is no longer the branching ratio σ, but the principal eigenvalue Λ ≠ σ of the synaptic matrix P_ij, with Λ_c = 1 [48,51]. This occurs because the synaptic dynamics creates correlations in the random neighbour network [26].…”
Section: Excitable Cellular Automata With LHG Synapses
confidence: 99%
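
The criticality criterion quoted above, Λ_c = 1 for the principal eigenvalue of the synaptic matrix, is straightforward to evaluate numerically. A minimal sketch follows; the matrix below is an arbitrary random stand-in rescaled to the critical point, not the LHG synaptic dynamics discussed in the excerpt.

```python
import numpy as np

def principal_eigenvalue(P):
    """Largest-modulus eigenvalue of a synaptic matrix P_ij (the Perron value
    for a non-negative matrix); criticality corresponds to a value of 1."""
    return float(np.max(np.abs(np.linalg.eigvals(P))))

# Arbitrary sparse non-negative stand-in, rescaled so that Lambda = 1.
rng = np.random.default_rng(1)
P = rng.random((200, 200)) * (rng.random((200, 200)) < 0.05)
P /= principal_eigenvalue(P)
print(principal_eigenvalue(P))   # ~1.0, i.e. at the quoted critical point
```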
“…[7,21–27]). One of the main results is that information processing seems to be optimized at a second-order absorbing phase transition [28–42]. This transition occurs between no activity (the absorbing phase) and nonzero steady-state activity (the active phase).…”
Section: Introduction
confidence: 99%
“…For concreteness we employ a general model of excitable networks [17,27,28], adapted to incorporate nodal heterogeneity [29]. Susceptible (quiescent) nodes can be excited either by (global) external driving or by (local) neighbour contributions: External inputs arrive at a steady Poisson rate h. An input rate h indicates that at each time step δt (= 1 ms), an external input may arrive with a probability p_h = 1 − exp(−h δt).…”
Section: Methods
confidence: 99%
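
The rate-to-probability conversion quoted in this excerpt, p_h = 1 − exp(−h δt), can be illustrated with a quick worked example; the rate value used below is an arbitrary assumption.

```python
import math

h = 0.1   # external Poisson rate in events per ms (illustrative value)
dt = 1.0  # time step of 1 ms, as in the excerpt
p_h = 1.0 - math.exp(-h * dt)
print(round(p_h, 4))  # 0.0952: chance that at least one input arrives in one step
```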
“…For concreteness we employ a general model of excitable networks [20,30,31], adapted to incorporate nodal heterogeneity [32]. Susceptible (quiescent) nodes can be excited either by (global) external driving or by (local) neighbour contributions.…”
Section: Methods
confidence: 99%