2018
DOI: 10.1088/1361-6463/aae641
The impact of on-chip communication on memory technologies for neuromorphic systems

Abstract: Emergent nanoscale non-volatile memory technologies with high integration density offer a promising solution to overcome the scalability limitations of CMOS-based neural network architectures by efficiently exhibiting the key principle of neural computation. Despite the potential improvements in computational costs, designing high-performance on-chip communication networks that support flexible, large-fanout connectivity remains a daunting task. In this paper, we elaborate on the communication requirements …

Cited by 24 publications (12 citation statements)
References 76 publications
“…The energy estimates were obtained by dividing the total energy consumption into computation, routing, and static energy, and then scaling each component proportionally to the number of spikes, the spiking density, and the latency, respectively. We referred to [6,7,26] for the energy consumption ratios of the three parts (computation, routing, and static), and the estimated values were normalized for each dataset. As a result of our estimation, our method consumed the least energy in almost all cases, with higher accuracy.…”
[Figure 5: Firing rate–regularity graph for various neural coding schemes]
Section: Comparison With Other Deep SNN Algorithms
confidence: 99%
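The decomposition this citing paper describes — splitting a total energy budget into computation, routing, and static parts and scaling each by a measure of network activity — can be sketched as follows. This is not the authors' code; the function name, the ratio values, and the activity figures are all invented for illustration (the real ratios come from their references [6,7,26]).

```python
# Hypothetical sketch of a per-component SNN energy estimate.
# All numbers are illustrative, not taken from the cited works.

def estimate_energy(total_energy, ratios, spikes, spike_budget,
                    density, density_budget, latency, latency_budget):
    """Split a total energy budget into computation, routing, and static
    parts, then scale each part by the network's measured activity
    relative to a reference budget."""
    comp_r, route_r, static_r = ratios  # component ratios, must sum to 1
    assert abs(comp_r + route_r + static_r - 1.0) < 1e-9
    computation = total_energy * comp_r * (spikes / spike_budget)
    routing = total_energy * route_r * (density / density_budget)
    static = total_energy * static_r * (latency / latency_budget)
    return computation + routing + static

# A sparser, lower-latency spiking model spends less in every component:
baseline = estimate_energy(1.0, (0.5, 0.3, 0.2), 1e6, 1e6, 0.10, 0.10, 100, 100)
sparse = estimate_energy(1.0, (0.5, 0.3, 0.2), 4e5, 1e6, 0.04, 0.10, 60, 100)
print(baseline, sparse)  # 1.0 vs 0.44 of the reference budget
```

Normalizing per dataset, as the quote describes, would then amount to dividing each model's estimate by the largest estimate obtained on that dataset.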
“…How spike trains encode information is among the most important questions in neuroscience [730]. Both spike timing and spike frequency have been proposed as modes of information transfer in biological brains [856], and both codes can be investigated in oxide-based synaptors, neuristors, and arrays [857], [858], [859], [860]. Owing to its enhanced energy efficiency, spike timing appears to be a preponderant code in biological brains, while frequency codes are believed to be exponentially more costly [861], [862], [863].…”
Section: Neural Codes
confidence: 99%
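The cost asymmetry in the quote above can be made concrete with a back-of-the-envelope count, assuming energy scales with the number of spikes. To distinguish one of 2^b symbols, a pure rate code needs up to 2^b − 1 spikes in a window, whereas a timing code can place a single spike in one of 2^b latency bins. The functions below are a toy illustration of that scaling, not a model from the cited works.

```python
# Toy spike-cost comparison for rate vs. timing codes, assuming energy
# is roughly proportional to spike count. Illustrative only.

def rate_code_spikes(bits):
    """Worst-case spikes needed to signal one of 2**bits levels by count."""
    return 2 ** bits - 1

def timing_code_spikes(bits):
    """A single spike whose latency bin selects one of 2**bits symbols."""
    return 1

for b in (1, 4, 8):
    print(b, rate_code_spikes(b), timing_code_spikes(b))
```

The rate code's spike cost grows exponentially with the number of bits conveyed, while the timing code's stays constant — the sense in which frequency codes are "exponentially more costly".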
“…What is worse, bus arbitration, addressing and latency prolong the transfer time (and decrease the system's efficacy). This type of communicational burst may easily lead to a ''communicational collapse'' [31], but it may also produce unintentional ''neuronal avalanches'' [5].…”
Section: Using High-Speed Bus(es)
confidence: 99%
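The "communicational collapse" scenario above — a synchronous burst overrunning a shared bus — reduces to simple arithmetic: events offered in a window minus events the bus can serve. The sketch below is a hypothetical toy model with invented parameters, not a simulation from the cited paper.

```python
# Toy model of a shared AER-style spike bus overrun by a synchronous
# burst. Neuron count and bus capacity are invented for illustration.

def backlog_after_burst(neurons, bus_events_per_ms, window_ms):
    """Events left queued after a window in which every neuron fires once."""
    offered = neurons  # one address-event per firing neuron
    served = bus_events_per_ms * window_ms  # bus capacity in the window
    return max(0, offered - served)

# A 1 ms synchronous burst from 10,000 neurons on a 2,000 events/ms bus
# leaves 8,000 events queued; each queued event then pays additional
# arbitration and addressing latency, delaying later traffic.
print(backlog_after_burst(10_000, 2_000, 1))
```

Such a backlog compounds: spikes delivered late can trigger further synchronized firing downstream, which is one way the unintentional "neuronal avalanches" mentioned in the quote can arise.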