2012
DOI: 10.1016/j.jpdc.2012.01.016
Scalable communications for a million-core neural processing architecture

Cited by 9 publications (11 citation statements)
References 48 publications
“…This work was motivated by the need to understand how the brain regulates itself to cope with injury. Exploiting the biological adaptive/repair mechanisms of the brain (Stevens, 2008 ) would provide a novel approach to fault tolerant computing, which goes beyond existing capabilities where reliable computations could then be realized using neural networks (Patterson et al, 2012 ), instead of traditional von Neumann computing architectures. Neural networks offer a fine-grained distributed computing architecture that captures to some degree high levels of parallel processing in the brain.…”
Section: Discussion (mentioning)
confidence: 99%
“…Moore et al [18] demonstrate that a large-scale neural simulation is communication-rather than compute-bound, and, in the design of the SpiNNaker CMP, particular emphasis is placed on the communication mechanisms employed [19].…”
Section: B. Communications (mentioning)
confidence: 99%
“…GPUs to access memory [3] and maintain process coherency [36], while keeping power consumption lower than on such systems. Each chip can be connected to 6 adjacent neighbours in a toroidal mesh using bi-directional asynchronous links for an aggregate spiking bandwidth of 1.5 Gb/s [37], supporting reconfigurable arbitrary connectivity [16]. Arguably a MC mesh NoC is the most suitable interconnect architecture for reconfigurable neural network implementations [46].…”
Section: Architecture (mentioning)
confidence: 99%
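The excerpt above describes chips linked to six adjacent neighbours in a toroidal mesh. As a minimal illustrative sketch (not taken from the paper), the following computes six-neighbour adjacency with toroidal wraparound; the specific offset set is the standard triangular-mesh choice and is an assumption here, as is the function name.

```python
# Hypothetical sketch: six-neighbour adjacency on a toroidal mesh,
# illustrating the kind of connectivity described in the excerpt.
# The offset set is assumed (standard triangular mesh), not quoted
# from the paper.
def mesh_neighbours(x, y, width, height):
    """Return the six toroidal-mesh neighbours of chip (x, y)."""
    offsets = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]
    # Modulo wraps coordinates at the mesh edges, making the mesh a torus.
    return [((x + dx) % width, (y + dy) % height) for dx, dy in offsets]
```

For example, on an 8x8 torus the corner chip (0, 0) wraps around to reach (7, 7), so every chip has exactly six distinct neighbours regardless of position.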