2020 57th ACM/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac18072.2020.9218728

Late Breaking Results: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick

Cited by 2 publications (3 citation statements)
References 1 publication

“…The synaptic weights in an SNN are updated based on local errors instead of global gradients backpropagated through layers, which is considered the key to neuromorphic hardware (Boybat et al., 2018; Stewart et al., 2020). Neuromorphic hardware systems have become the core of hardware acceleration and embedded systems (Vivancos et al., 2021).…”
Section: Introduction
confidence: 99%
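The local-update claim in the statement above can be made concrete with a small sketch. The following is a minimal illustration, assuming a single rate-coded spiking layer trained with a delta-rule-style update from a locally available target; all names and constants are hypothetical and not taken from the cited works.

```python
import numpy as np

# Minimal sketch of a local learning rule for one spiking layer.
# All names and constants are illustrative, not from the cited works.
# The layer updates its weights from a locally available error signal
# (target minus its own output), so no gradient is backpropagated
# through other layers.

rng = np.random.default_rng(0)
n_in, n_out = 64, 10
W = rng.normal(0.0, 0.1, size=(n_out, n_in))  # synaptic weights
lr = 0.01                                     # learning rate

def forward(spikes_in):
    # Linear membrane drive followed by a hard threshold -> output spikes.
    return (W @ spikes_in > 0.5).astype(float)

# One local update step: outer product of local error and presynaptic spikes.
spikes_in = (rng.random(n_in) < 0.2).astype(float)  # input spike vector
target = np.zeros(n_out)
target[3] = 1.0                                     # locally supplied target
local_error = target - forward(spikes_in)           # available within the layer
W += lr * np.outer(local_error, spikes_in)          # delta-rule weight update
```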
“…One is the high-throughput data transmission between off-chip and on-chip memory. Although high-performance memory systems have been proposed to optimize data transmission, the power consumption is irreducible (Lian et al., 2019; Vivancos et al., 2021). Dynamic random-access memory (DRAM) is usually used as the off-chip memory.…”
Section: Introduction
confidence: 99%
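To see why the off-chip traffic this statement refers to is the pressure point, a back-of-envelope comparison helps. The sketch below assumes ballpark per-access energies of the kind often quoted in the accelerator literature (roughly a hundredfold gap between off-chip DRAM and on-chip SRAM); the specific numbers and the 60M-parameter model are illustrative assumptions, not measurements from the cited papers.

```python
# Back-of-envelope: energy of streaming weights from off-chip DRAM vs
# keeping them in on-chip SRAM. Per-access energies are assumed ballpark
# figures (order of magnitude only); the 60M-parameter model is illustrative.
DRAM_PJ_PER_WORD = 640.0   # assumed pJ per 32-bit off-chip DRAM access
SRAM_PJ_PER_WORD = 5.0     # assumed pJ per 32-bit on-chip SRAM access

n_weights = 60e6           # e.g., a mid-sized CNN, one read per weight
dram_mj = n_weights * DRAM_PJ_PER_WORD * 1e-9  # pJ -> mJ
sram_mj = n_weights * SRAM_PJ_PER_WORD * 1e-9

print(f"DRAM: {dram_mj:.1f} mJ, SRAM: {sram_mj:.2f} mJ "
      f"({DRAM_PJ_PER_WORD / SRAM_PJ_PER_WORD:.0f}x gap per access)")
```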
“…Fundamentally, compression relies on non-uniformity in the data value distribution statically and temporally. It is instructive to compare and contrast the behavior of floating-point values during training with the fixed-point values commonly used during inference, as many techniques have capitalized on the properties of the value stream during inference [36], [39]–[44]. During inference: 1) Zeros are very common, especially in models using ReLU.…”
Section: Introduction
confidence: 99%
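The zero-heavy value streams mentioned in this last statement are exactly what simple zero-value compression exploits. Below is an illustrative bitmask-plus-packed-nonzeros scheme in the spirit of such techniques; it is a generic sketch, not the specific compression method of the cited work.

```python
import numpy as np

# Illustrative zero-value compression: a 1-bit-per-element mask plus the
# packed non-zero values. This sketches the generic idea that zero-heavy
# (post-ReLU) streams compress well; it is not the cited work's scheme.

def compress(values: np.ndarray):
    mask = values != 0
    return np.packbits(mask), values[mask]   # bitmask + non-zeros

def decompress(packed_mask, nonzeros, n):
    mask = np.unpackbits(packed_mask, count=n).astype(bool)
    out = np.zeros(n, dtype=nonzeros.dtype)
    out[mask] = nonzeros
    return out

rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal(1024, dtype=np.float32), 0)  # ReLU output
packed_mask, nz = compress(acts)
ratio = acts.nbytes / (packed_mask.nbytes + nz.nbytes)
print(f"{(acts == 0).mean():.0%} zeros -> {ratio:.2f}x compression")
assert np.array_equal(decompress(packed_mask, nz, acts.size), acts)
```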