2019
DOI: 10.1109/mm.2019.2948345
ΔNN: Power-efficient Neural Network Acceleration using Differential Weights

Cited by 4 publications (1 citation statement)
References 10 publications
“…Weights [57], [61], [133], [153], [241]; Activations [134], [237]. Computation reuse and memoization: Partial [133], [134], [161], [237], [239], [241], [242]; Full [188], [238], [240]. Computation reduction with early termination [243]–[246]. For instance, a configurable communication network allows PEs to execute in dataflow fashion; PEs can request partial refills of their buffers with new data. EyerissV2 [63] proposed a hierarchical mesh interconnect with configurable router nodes, which allow a router to be configured to communicate data between source (e.g., shared memory) and destination (e.g., PE) ports via broadcast, multicast, or unicast.…”
Section: Value Similarity
confidence: 99%
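The citation statement classifies ΔNN under weight-based value similarity and computation reuse. The core idea behind differential-weight reuse can be sketched as follows; this is a minimal illustrative example, not the paper's implementation, and the function name `products_differential` is an assumption for illustration. When one input activation is multiplied by a sequence of similar weights, each product can be derived from the previous one via the weight delta, skipping the multiply entirely when the delta is zero:

```python
def products_differential(x, weights):
    """Compute x*w for each weight by reusing the previous product:
    x*w_j = x*w_{j-1} + x*(w_j - w_{j-1}).
    A zero delta between consecutive weights skips the multiplication,
    which is the source of the power savings when weights are similar.
    Returns the products and the number of multiplications performed."""
    products = []
    prev_w = 0       # previous weight (implicit leading zero)
    prev_p = 0       # previous product x * prev_w
    mults = 0        # multiplications actually executed
    for w in weights:
        delta = w - prev_w
        if delta != 0:
            prev_p = prev_p + x * delta  # one multiply per nonzero delta
            mults += 1
        products.append(prev_p)
        prev_w = w
    return products, mults
```

With similar or repeated weights, e.g. `products_differential(3, [2, 2, 5])`, only two multiplications are needed for three products, since the second weight reuses the first product unchanged.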