Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3511985
DREW: Efficient Winograd CNN Inference with Deep Reuse

Cited by 9 publications (2 citation statements)
References 29 publications
“…Xie et al. [47] proposed building an efficient two-phase LSM tree as a lookup table, leveraging the capacity advantages of Intel Optane Persistent Memory to speed up Molecular Dynamics (MD) simulations. Several studies [29,45,28] empirically showed that similar computations recur in CNN training and inference and proposed projecting similar CNN computation results into buckets via locality-sensitive hashing (LSH) to speed up CNN training and inference.…”
Section: Related Work
confidence: 99%
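The LSH-based reuse idea cited above can be illustrated with a minimal sketch: rows of an unfolded activation matrix are hashed into buckets by sign-random-projection LSH, each bucket is evaluated once on a representative vector, and the result is reused for every similar row. This is only an assumption-based illustration of the general technique; the function and parameter names below are hypothetical and do not reproduce the cited papers' implementations.

```python
# Minimal sketch of LSH-based computation reuse (illustrative only).
import numpy as np

def lsh_bucket_ids(rows, n_hashes=8, seed=0):
    """Assign each row vector to a bucket via sign-random-projection LSH."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((rows.shape[1], n_hashes))
    bits = (rows @ planes) > 0                     # one sign bit per hyperplane
    return bits.dot(1 << np.arange(n_hashes))      # pack bits into a bucket id

def reuse_matmul(x_rows, w):
    """Approximate x_rows @ w, evaluating each LSH bucket only once."""
    ids = lsh_bucket_ids(x_rows)
    uniq, inverse = np.unique(ids, return_inverse=True)
    # One centroid per bucket stands in for all similar rows.
    centroids = np.stack([x_rows[ids == u].mean(axis=0) for u in uniq])
    return (centroids @ w)[inverse]                # broadcast bucket results back

# Toy usage: rows of an im2col-style activation matrix times a weight matrix.
x = np.random.rand(1024, 27).astype(np.float32)
w = np.random.rand(27, 64).astype(np.float32)
print("mean abs error:", np.abs(reuse_matmul(x, w) - x @ w).mean())
```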
“…Similarity in DNNs: prior works [29,28,45,35] report that high similarity exists among convolution computations within a single image and among RNN neuron activations across consecutive time steps. Cao et al. [4] proposed a decoupled transformer architecture dedicated to Question-Answering (QA) tasks.…”
Section: Related Work
confidence: 99%