Proceedings of the 34th Annual Computer Security Applications Conference 2018
DOI: 10.1145/3274694.3274696

I Know What You See

Abstract: Deep learning has become the de facto computational paradigm for many kinds of perception problems, including privacy-sensitive applications such as online medical image analysis. The data privacy of these deep learning systems is therefore a serious concern. Unlike previous research focusing on exploiting privacy leakage from deep learning models, in this paper we present the first attack on the implementation of deep learning models. Specifically, we perform the attack on an FPGA-bas…

Cited by 130 publications (17 citation statements)
References 27 publications
“…Li et al. [27] combine common data samples with exclusive "logos" and train models to predict them as specific labels, so that the ownership of the model can be verified by a third party. Jia et al. [28] propose entangled watermark embedding to address watermark removal attacks.…”
Section: Centralized Model IP Protection (mentioning)
confidence: 99%
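As a concrete illustration of the logo-based watermarking recipe in the excerpt above, here is a minimal, self-contained PyTorch sketch. The synthetic data, tiny linear model, logo patch, and target label are illustrative assumptions, not the actual setup of Li et al. [27]: a small slice of the training set is stamped with the owner's logo, relabeled to a fixed target class, and trained jointly with the clean data; a third party can later verify ownership by checking whether logo-stamped probes are classified as that target.

# Hypothetical sketch of logo-based watermark embedding, in the spirit of
# Li et al. [27]. All data and the model below are synthetic stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

def stamp_logo(images, logo):
    """Overwrite the top-left corner of each image with the logo patch."""
    out = images.clone()
    out[:, :, :logo.shape[-2], :logo.shape[-1]] = logo
    return out

# Assumed setup: 1x8x8 images, 10 classes, a tiny linear classifier.
images = torch.rand(256, 1, 8, 8)
labels = torch.randint(0, 10, (256,))
logo = torch.ones(1, 3, 3)           # the owner's exclusive logo
wm_target = 7                        # label forced onto watermarked samples

# Build the watermark set from a small slice of the data.
wm_images = stamp_logo(images[:32], logo)
wm_labels = torch.full((32,), wm_target)

train_x = torch.cat([images, wm_images])
train_y = torch.cat([labels, wm_labels])

model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                 # joint training on clean + watermark data
    opt.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    opt.step()

# Verification: stamp fresh images with the logo and check whether the
# model predicts the agreed-upon target label.
probe = stamp_logo(torch.rand(64, 1, 8, 8), logo)
wm_acc = (model(probe).argmax(1) == wm_target).float().mean()
print(f"watermark trigger accuracy: {wm_acc:.2f}")

Because the logo only occupies a small corner, accuracy on clean inputs is largely unaffected, which is what makes backdoor-style watermarks attractive for ownership claims.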
“…The secret is transmitted through transiently executed instructions that are destined to be flushed from the pipeline. In most cases, these vulnerabilities rely on the timing differences required to access different data values, but other types of disclosing gadgets exist [75, 50]. As side channels of this type arise from standard hardware behavior, transient execution vulnerabilities are not only difficult to mitigate, but nearly all current mitigation techniques either introduce a significant amount of overhead or do not protect all vulnerable microarchitectural structures [9].…”
Section: Transient Execution Attacks (mentioning)
confidence: 99%
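The timing-based disclosure mechanism described above can be sketched with a toy Python simulation. A real attack measures actual hardware cache latencies; here the cache model, the hit/miss latencies, and the flush primitive are all simplified assumptions chosen only to show the recovery logic, not a working exploit.

# Toy simulation of a cache-timing disclosure gadget: a transiently
# executed access leaves a secret-dependent footprint in the cache, and
# the attacker recovers the secret by timing probe accesses.

CACHE_HIT_NS, CACHE_MISS_NS = 40, 300   # assumed latencies, illustrative only
NUM_LINES = 256                          # one probe line per byte value

cache: set[int] = set()                  # which probe lines are "cached"

def flush():
    cache.clear()

def transient_access(secret_byte: int):
    # Stands in for instructions that are executed transiently and then
    # squashed: the architectural result is discarded, but the cache line
    # indexed by the secret stays resident (the microarchitectural side effect).
    cache.add(secret_byte)

def timed_probe(line: int) -> int:
    latency = CACHE_HIT_NS if line in cache else CACHE_MISS_NS
    cache.add(line)                      # probing also fills the cache
    return latency

def recover_byte() -> int:
    # The fastest probe line is the one the transient access touched.
    timings = [timed_probe(i) for i in range(NUM_LINES)]
    return min(range(NUM_LINES), key=timings.__getitem__)

secret = 0x5A
flush()
transient_access(secret)                 # victim side
print(hex(recover_byte()))               # attacker side -> 0x5a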
“…Finally, Wei et al. [164] proposed a power side-channel attack on an FPGA-based convolutional neural network accelerator, which can successfully recover the input image from the power traces captured during the inference stage.…”
Section: Feature Estimation Attack (mentioning)
confidence: 99%
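The following toy numpy sketch captures the gist of this attack under a deliberately simplified leakage model: per-cycle power is assumed to be affine in the pixel value being processed by the accelerator, with additive Gaussian noise. The coefficients a, b and the noise level are illustrative assumptions, not measurements from the paper.

# Toy sketch of input recovery from power traces: if per-cycle power
# correlates with the pixel streaming through the first layer, an attacker
# who learns the leakage coefficients can invert the trace into an image.
import numpy as np

rng = np.random.default_rng(0)

# "Secret" input image processed by the accelerator.
image = rng.integers(0, 256, size=(28, 28)).astype(float)

# Assumed leakage model: one trace sample per pixel-cycle, power roughly
# affine in the pixel value plus measurement noise.
a, b, noise_sd = 0.02, 1.5, 0.05
trace = a * image.ravel() + b + rng.normal(0, noise_sd, image.size)

# Attacker side: with the (estimated) leakage coefficients, invert the
# power model sample by sample to reconstruct the image.
recovered = ((trace - b) / a).reshape(28, 28)

err = np.abs(recovered - image).mean()
corr = np.corrcoef(image.ravel(), recovered.ravel())[0, 1]
print(f"mean abs pixel error: {err:.1f}, correlation: {corr:.3f}")

The real attack must additionally align trace samples with the accelerator's pixel-processing schedule and estimate the leakage model from profiling; this sketch only shows the inversion step once those are known.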