2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
DOI: 10.1109/micro50266.2020.00031

Ptolemy: Architecture Support for Robust Deep Learning

Abstract: Deep learning is vulnerable to adversarial attacks, where carefully crafted input perturbations can mislead a well-trained Deep Neural Network (DNN) into producing incorrect results. Adversarial attacks jeopardize the safety, security, and privacy of DNN-enabled systems. Today's countermeasures to adversarial attacks either lack the capability to detect adversarial samples at inference time, or introduce prohibitively high overhead to be practical at inference time. We propose Ptolemy, an algorithm-archite…
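To make the threat the abstract describes concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM, Goodfellow et al.), one well-known way such input perturbations are crafted. It illustrates the attack class only, not Ptolemy's detection method; `model`, `x`, and `label` are assumed to be a trained PyTorch classifier, a correctly classified input batch, and its true labels.

```python
# Minimal FGSM sketch: perturb the input in the direction that
# increases the classifier's loss, bounded by an L-inf budget epsilon.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small adversarial perturbation of L-inf size epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed-gradient step, clamped back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with epsilon small enough that the perturbation is imperceptible, a step like this is often enough to flip a well-trained DNN's prediction, which is why inference-time detection matters.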

Cited by 29 publications (14 citation statements) · References 62 publications
“…As the proposed Block-Skim method aims to reduce semantic redundancy along the input sequence dimension, it is theoretically compatible with these model compression methods, which target model redundancy. By designing Block-Skim not to modify the backbone model, our method is generally applicable to these algorithms as well as other model pruning methods (Guo et al. 2020; Qiu et al. 2019; Gan et al. 2020).…”
Section: Inference Speedup Results
Confidence: 99%
“…With such a taxonomy, we further apply analytic experiments to explore the function of each behavior according to distance. As our next step, we also plan to further analyze the behavior and function of variant patterns with probing-task datasets (Conneau et al., 2018) and analytic tools (Qiu et al., 2019; Gan et al., 2020). Besides, several recent works focus on optimizing the over-parameterized MHA mechanism (Michel et al., 2019; Kovaleva et al., 2019).…”
Section: Discussion
Confidence: 99%
“…HASI does require multiple inference passes, but data-reuse optimizations help to alleviate this overhead. HASI's noise injection allows detection to generalize more easily, as opposed to other methods that require profiling to select appropriate parameters, such as Feature Squeezing [36] and Path Extraction [10], [28].…”
Section: Robustness in Neural Networks
Confidence: 99%
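As an illustration of the noise-injection idea this quote attributes to HASI, here is a minimal, hedged sketch: re-run inference several times with random input noise and flag inputs whose predicted label is unstable. The function name, noise scale `sigma`, pass count, and flip threshold are illustrative assumptions, not values or code from the HASI paper.

```python
# Hedged sketch of noise-injection detection: adversarial inputs tend to sit
# near decision boundaries, so their labels flip more often under input noise.
import torch

def noise_injection_detect(model, x, n_passes=8, sigma=0.1, flip_thresh=0.5):
    """Flag a single input x as adversarial if its label is unstable under noise."""
    with torch.no_grad():
        base = model(x).argmax(dim=-1)
        flips = 0
        # Multiple inference passes, as the quoted statement notes.
        for _ in range(n_passes):
            noisy = x + sigma * torch.randn_like(x)
            if model(noisy).argmax(dim=-1).item() != base.item():
                flips += 1
    return flips / n_passes > flip_thresh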