2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)
DOI: 10.1109/dac.2018.8465773

Reverse Engineering Convolutional Neural Networks Through Side-channel Information Leaks

Cited by 124 publications (155 citation statements)
References 10 publications
“…The proposed work can effectively recover the key by analysing only 32,500 plaintexts. The work in [314] studies the robustness of CNNs against various side-channel information leaks.…”
Section: F. Deep Learning in Side Channel Attacks Detection (mentioning)
confidence: 99%
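
The quote above describes a key-recovery attack with a fixed plaintext budget. As a rough, non-authoritative illustration of this class of side-channel key recovery, here is a minimal correlation power analysis (CPA) sketch; the toy S-box, Hamming-weight leakage model, and simulated traces are assumptions for illustration, not the setup of the cited work.

import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256)                      # toy S-box (assumption, not AES)
HW = np.array([bin(v).count("1") for v in range(256)])

def simulate_trace(pts, key_byte):
    # Leakage model (assumption): Hamming weight of S-box output plus noise.
    return HW[SBOX[pts ^ key_byte]] + rng.normal(0.0, 1.0, pts.size)

n = 32_500                                       # trace budget from the quote
pts = rng.integers(0, 256, n)
traces = simulate_trace(pts, key_byte=0x3C)      # 0x3C is the secret to recover

# Score every key guess by correlation between predicted and observed leakage.
scores = [abs(np.corrcoef(HW[SBOX[pts ^ k]], traces)[0, 1]) for k in range(256)]
print(hex(int(np.argmax(scores))))               # prints 0x3c

Only the correct guess predicts the simulated leakage, so it maximizes the correlation.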
“…In [5], attackers fool DNNs into making mistakes by modifying their parameters through fault injection: the single bias attack modifies one parameter with a large perturbation to cause misclassification, while the gradient descent attack achieves stealthiness by adding small perturbations to a number of parameters. Reverse-engineering attacks [6] can identify the model parameters in off-chip memory, which attackers may then stealthily replace. [15] performs practical laser fault injection on the activation functions of DNNs using a near-infrared diode laser.…”
Section: B. Related Work and Motivation (mentioning)
confidence: 99%
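
To make the single bias attack idea above concrete, here is a minimal sketch of perturbing one bias term of a toy linear classifier so a chosen input is misclassified; the model, data, and perturbation size are illustrative assumptions, not the attack from [5].

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 4))                     # toy linear classifier (assumption)
b = np.zeros(10)
x = rng.normal(size=4)

clean = int(np.argmax(W @ x + b))                # prediction before the fault
target = (clean + 1) % 10
b_faulted = b.copy()
b_faulted[target] += 100.0                       # one large fault on a single bias
print(clean, int(np.argmax(W @ x + b_faulted)))  # argmax is hijacked to `target`

A single large fault suffices because one bias shifts an entire output logit; the trade-off is that such a large parameter change is easy to detect.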
“…Liu et al [5] first proposed attacking DNN parameters to cause misclassification via two fault injection methods: the single bias attack and the gradient descent attack. Reverse-engineering attacks [6], [7] on hardware DNN accelerators can identify the model parameters in off-chip memory, after which attackers may stealthily substitute malicious parameters for the original ones. These attacks seriously threaten safety-critical applications based on DNNs.…”
Section: Introduction (mentioning)
confidence: 99%
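
The gradient descent attack mentioned above trades one large fault for many small ones. Below is a minimal sketch, under assumed toy conditions (a linear softmax model and a hand-rolled gradient), of nudging many parameters by small steps until a chosen input lands in an attacker-picked class; it illustrates the general idea, not the exact procedure from [5].

import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 4))                     # toy softmax classifier (assumption)
b = np.zeros(10)
x = rng.normal(size=4)
target = (int(np.argmax(W @ x + b)) + 1) % 10    # attacker-chosen wrong class

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    p = softmax(W @ x + b)
    g = p.copy()
    g[target] -= 1.0                             # d(cross-entropy on target)/d(logits)
    W -= 0.01 * np.outer(g, x)                   # many small nudges across parameters
    b -= 0.01 * g
print(int(np.argmax(W @ x + b)) == target)       # True: input now misclassified

Spreading the perturbation over many parameters keeps each individual change small, which is what makes this variant stealthier than the single bias attack.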
“…Tramer et al [37] demonstrated a model inversion attack by exploiting the relationship between queries and confidence values on different machine learning models, such as DNNs and logistic regressions. Beyond attacks that exploit privacy leakage from the training sets, Hua et al [11] presented a novel attack that reverse engineers the underlying network information. They utilize memory access patterns to infer the network structure, such as the number of layers and the feature map size of each layer.…”
Section: Related Work (mentioning)
confidence: 99%
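
As a rough illustration of the memory-access-pattern inference Hua et al [11] are credited with, the sketch below treats each maximal burst of off-chip writes in a synthetic trace as one layer's output, so burst lengths estimate feature map volumes. The trace format and layer sizes are assumptions for illustration, not real accelerator data.

from itertools import groupby

# Synthetic access trace (assumption): ('R', addr) for reads, ('W', addr) for writes.
true_volumes = [64 * 32 * 32, 128 * 16 * 16, 10]  # output elements per layer
trace, addr = [], 0
for vol in true_volumes:
    trace += [('R', 0)] * 5                       # weight/input reads between layers
    trace += [('W', addr + i) for i in range(vol)]
    addr += vol

# Each maximal run of writes is read off as one layer's output feature map.
inferred = [sum(1 for _ in grp)
            for op, grp in groupby(trace, key=lambda e: e[0]) if op == 'W']
print(inferred)                                   # [65536, 32768, 10]

Counting layer boundaries this way also yields the network depth, matching the quote's point that access patterns alone reveal the number of layers and per-layer feature map sizes.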