2018
DOI: 10.1109/jsyst.2017.2728861
Iterative Specification as a Modeling and Simulation Formalism for I/O General Systems

Cited by 8 publications (6 citation statements)
References 19 publications
“…DEVS is a system theoretic characterization of discrete event simulations based on abstraction of events and time intervals from continuous data streams [15][16][17]. Such abstractions carry information that can be efficiently employed, not only in simulation, but also in accounting for the real-world constraints that shape cognitive information processes [18,19].…”
Section: Review of DEVS Abstractions for Brain Architectures
confidence: 99%
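The statement above describes DEVS as abstracting events and time intervals out of continuous data streams. A minimal sketch of that idea — not the formalism of the cited paper — is quantized-state event extraction: emit a discrete event only when a sampled signal crosses a new quantization level. The function and parameter names here (`extract_events`, `quantum`) are illustrative, not from the source.

```python
def extract_events(samples, quantum=1.0):
    """Abstract a continuous stream into discrete events.

    Emits an event whenever the signal enters a new quantization
    level, a DEVS-style quantized view of a continuous stream.

    samples: list of (time, value) pairs sampled from the stream.
    Returns a list of (time, level) events, one per level crossing.
    """
    events = []
    last_level = None
    for t, v in samples:
        level = int(v // quantum)   # current quantization level
        if level != last_level:     # level crossing => discrete event
            events.append((t, level))
            last_level = level
    return events
```

Between crossings no events are produced, so a slowly varying stream yields far fewer events than samples — the source of the efficiency the quoted statement alludes to.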
“…However, the spikes must be transmitted between pre- and post-synaptic neurons, between two time steps. With more advanced techniques such as hybrid parallel computation [19], the spikes are computed in parallel for each neuron thanks to discrete event programming, which allows high precision, while the exchange at the level of the synapses can be done at a less fine scale in a discretized way [37,3,45,50]. Using hybrid parallel computations, in [24] the authors were able to simulate a network of order 10^6 neurons and 10^10 synapses with parallel supercomputers capable of running tens of millions of threads in parallel.…”
Section: Introduction
confidence: 99%
“…At each time step, each process corresponding to a different neuron has to wait for the calculations of all the other processes to know what needs to be updated before computing the next step. With time asynchrony, we can leverage discrete-event programming (25)(26)(27)(28) to track the whole system in time by jumps: from one spike in the network to another spike in the network. Since a very small percentage of a brain is firing during a given unit of time (29), the gain we have is tremendous in terms of computations.…”
Section: Introduction
confidence: 99%
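The statement above explains the core gain of discrete-event programming for spiking networks: instead of synchronizing every neuron at every time step, the simulation jumps from one spike to the next. A minimal sketch of that event-driven loop, using a priority queue of spike events, is below. This toy model (immediate threshold firing, fixed transmission delay, no leak or refractory period) and all names in it are illustrative assumptions, not the method of the cited papers.

```python
import heapq

def simulate_spikes(initial_spikes, synapses, threshold=1.0, delay=1.0, t_max=10.0):
    """Event-driven spiking simulation: jump from spike to spike.

    initial_spikes: list of (time, neuron) events seeding the network.
    synapses: dict mapping neuron -> list of (target, weight) pairs.
    Returns the list of (time, neuron) spikes in firing order.
    """
    potential = {}                 # membrane potential per neuron
    events = list(initial_spikes)
    heapq.heapify(events)          # min-heap ordered by event time
    fired = []

    while events:
        t, n = heapq.heappop(events)   # next spike anywhere in the network
        if t > t_max:
            break
        fired.append((t, n))
        # Deliver the spike to post-synaptic neurons after a delay.
        for target, w in synapses.get(n, []):
            potential[target] = potential.get(target, 0.0) + w
            if potential[target] >= threshold:
                potential[target] = 0.0    # reset after firing
                heapq.heappush(events, (t + delay, target))
    return fired
```

Only neurons touched by a spike do any work, which is why the sparsity of firing cited in (29) translates directly into fewer computations than a step-by-step scan of the whole network.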
“…However, the spikes must be transmitted between pre- and post-synaptic neurons, between two time steps. With more advanced techniques such as hybrid parallel computation [19], the spikes are computed in parallel for each neuron thanks to discrete event programming, while the exchange at the level of the synapses can be done at a less fine scale in a discretized way [36, 3, 44, 49]. Nevertheless, in both cases (classical or hybrid parallel computation), synchronization between neurons is necessary to propagate the spikes at the synapses, even if it is not done at the same time scale.…”
Section: Introduction
confidence: 99%