2021
DOI: 10.1007/978-3-030-91424-0_2
Accelerating TEE-Based DNN Inference Using Mean Shift Network Pruning

Cited by 2 publications (2 citation statements)
References 22 publications
“…Another defense method is to build a TEE in the DNN accelerators [17]. However, to achieve accurate predictions, a DNN requires extensive memory and computing resources and thus does not operate well in TEE enclaves, which have restricted memory space [19].…”
Section: Vulnerability and Motivation
Confidence: 99%
“…Furthermore, to convert a long plaintext message into fixed-size blocks, AES needs some padding data, which can incur extra memory overhead. Third, to achieve accurate decision-making, a DNN requires extensive memory and computing resources and thus does not operate well in a TEE enclave with restricted memory space [19].…”
Section: Introduction
Confidence: 99%
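
The AES padding overhead mentioned in the statement above can be made concrete with a minimal Python sketch. It assumes PKCS#7 padding and AES's fixed 16-byte block size; the citing paper does not name its padding scheme, so the scheme and the helper pkcs7_padding_overhead are illustrative assumptions.

# Minimal sketch of the per-message padding overhead AES introduces,
# assuming PKCS#7 padding and AES's fixed 16-byte block size.
# The padding scheme is an assumption; the citing paper does not specify one.

AES_BLOCK_SIZE = 16  # bytes

def pkcs7_padding_overhead(message_len: int, block_size: int = AES_BLOCK_SIZE) -> int:
    """Number of padding bytes PKCS#7 appends to a message of message_len bytes.

    PKCS#7 always pads: a message that exactly fills the last block
    still gets a whole extra block of padding.
    """
    remainder = message_len % block_size
    return block_size - remainder if remainder else block_size

if __name__ == "__main__":
    for n in (15, 16, 1000, 4096):
        pad = pkcs7_padding_overhead(n)
        print(f"plaintext {n:>5} B -> padded {n + pad:>5} B (+{pad} B padding)")

For example, a 4096-byte message gains a full extra 16-byte block, while a 1000-byte message gains 8 bytes; the overhead is small per message but accumulates when many messages are encrypted inside a memory-constrained enclave.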