2022
DOI: 10.1109/tdsc.2021.3126315
Model Protection: Real-Time Privacy-Preserving Inference Service for Model Privacy at the Edge

Cited by 21 publications (6 citation statements)
References 29 publications
“…Different from the above works, we focus on reducing the SLO violation rate caused by interference among multiple models. In addition, some edge inference frameworks involve privacy protection [21], [22] and edge-cloud collaboration [23], [24], respectively. These works are orthogonal to BCEdge and can alleviate privacy and resource constraints.…”
Section: A Model-level Dnn Inference Servicementioning
confidence: 99%
“…A third approach reported in the literature proposes to delegate the computation of critical neurons to the TEE. Hou et al [17] propose a framework to design a secure version of the ML model with crafted random weights. More specifically, the framework adds crafted values to obfuscate strategic weights within the ANN.…”
Section: Selected Neuronsmentioning
confidence: 99%
“…Works that delegate the computation of selected neurons to the TEE are the most efficient in terms of TEE memory footprint. The approach of [17] only requires space to store the selected neurons and the output of the layer under process. Despite performing most of the computation outside the TEE, this work still requires the outputs of each layer to be denoised within the TEE, which significantly increases the number of context switches compared to our framework.…”
Section: Gap Analysismentioning
confidence: 99%
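The obfuscate-then-denoise scheme described in the statements above can be illustrated with a minimal sketch. This is not the cited framework's actual code; the layer shape, the additive random masks on a few selected rows, and the TEE-side correction step are all illustrative assumptions about how additive weight obfuscation with in-TEE denoising can work for a linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear layer: y = W @ x
d_out, d_in = 4, 3
W = rng.normal(size=(d_out, d_in))
x = rng.normal(size=d_in)

# Offline step (inside the TEE): pick a few strategic neurons and add
# crafted random masks to their weight rows; only the obfuscated
# weights are shipped to the untrusted device.
selected = [0, 2]                      # indices of protected neurons
R = np.zeros_like(W)
R[selected] = rng.normal(size=(len(selected), d_in))
W_obf = W + R                          # untrusted side only ever sees W_obf

# Untrusted side: full matrix multiply on obfuscated weights.
y_noisy = W_obf @ x

# TEE side: denoise the layer output by subtracting the mask's
# contribution; only the mask R (sparse in selected rows) and the
# layer output need to reside in the TEE.
y = y_noisy - R @ x

assert np.allclose(y, W @ x)
```

Because the correction `R @ x` must run inside the TEE for every layer, each layer forces a transition into and out of the enclave, which is the context-switch overhead the second statement criticizes.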