2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2018.8342033
SmartShuttle: Optimizing off-chip memory accesses for deep learning accelerators

Cited by 83 publications (54 citation statements)
References 10 publications
“…Inputs, weights, and outputs are pruned in [19]. Our work targets at general CNN accelerators without data pruning/compression, so the results reported in [10], [19] are not directly comparable to ours. Instead, we try to make an approximate comparison.…”
Section: Convolutional Layer Index (mentioning confidence: 83%)
“…Rather than using a single dataflow, several studies have integrated multiple dataflows into an accelerator (with increased hardware cost) and selected the best one according to the layer dimensions [19]- [23]. These approaches usually perform better than the approaches based on a single dataflow.…”
Section: B. Related Work (mentioning confidence: 99%)