2021 IEEE European Symposium on Security and Privacy (EuroS&P)
DOI: 10.1109/eurosp51992.2021.00024

Sponge Examples: Energy-Latency Attacks on Neural Networks

Cited by 63 publications (42 citation statements: 0 supporting, 42 mentioning, 0 contrasting)
References 52 publications
“…However, the exposure of the model's predictions represents a significant risk as an adversary can leverage this information to steal the model's knowledge [26,41,7,32,31,11,28,19]. The threat of such model extraction attacks is two-fold: adversaries may use the stolen model for monetary gains or as a reconnaissance step to mount further attacks [33,37].…”
Section: Introduction (mentioning)
confidence: 99%
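The extraction loop this excerpt alludes to can be made concrete with a minimal sketch: a surrogate model is trained only on the victim's exposed predictions. Everything below (architectures, query distribution, hyperparameters) is an illustrative assumption, not the method of any specific cited work, which use far more sophisticated query strategies.

```python
# Minimal sketch of query-based model extraction (illustrative assumptions
# throughout; not the method of any specific cited work).
import torch
import torch.nn as nn

victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()
surrogate = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(500):
    queries = torch.randn(32, 20)           # adversary-chosen inputs
    with torch.no_grad():
        labels = victim(queries).argmax(1)  # only predictions are exposed
    opt.zero_grad()
    loss_fn(surrogate(queries), labels).backward()
    opt.step()
```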
“…Availability-based attacks have only recently gained the attention of researchers, even though a system's availability is a security-critical aspect of many applications. Shumailov et al [20] were the first to present an attack (called sponge examples) targeting the availability of computer vision and NLP models. They demonstrated that adversarial examples can more than double the inference time of NLP transformer-based models, with inference times up to ×6000 greater than for regular inputs.…”
Section: Availability-Based Attacks (mentioning)
confidence: 99%
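The slowdown this excerpt describes can be probed with a simple timing harness. A minimal sketch, assuming PyTorch and HuggingFace transformers are installed; the model name and the crude high-token-count input are illustrative stand-ins, since the paper constructs sponge examples with genetic and gradient-based search rather than this heuristic.

```python
# Minimal sketch: compare wall-clock inference time of a regular input
# against a crude "sponge-like" input (illustrative heuristic only).
import time
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # hypothetical model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

def inference_time(text: str, runs: int = 5) -> float:
    """Average wall-clock seconds for one generation over `text`."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        model.generate(**inputs)  # warm-up run, excluded from timing
        start = time.perf_counter()
        for _ in range(runs):
            model.generate(**inputs)
    return (time.perf_counter() - start) / runs

regular = "The quick brown fox jumps over the lazy dog."
# Rare sub-word fragments inflate the token count and output length.
sponge_like = " ".join(["zxqvj"] * 50)

print(f"regular:     {inference_time(regular):.3f}s")
print(f"sponge-like: {inference_time(sponge_like):.3f}s")
```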
“…In this paper, we present NMS-Sponge, the first availability attack against the end-to-end object detection pipeline, performed by applying a universal adversarial perturbation (UAP). Our initial attempt to apply the sponge attack proposed by Shumailov et al [20] to YOLO to slow inference was unsuccessful, because for most images the vast majority of the model's activation values are nonzero by default.…”
Section: Introduction (mentioning)
confidence: 99%
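The activation-density observation in this excerpt is straightforward to check with forward hooks. A minimal sketch, assuming torchvision is available; probing a ResNet-18 on a random input is an illustrative substitute for the quote's YOLO measurement.

```python
# Minimal sketch: measure the fraction of nonzero post-ReLU activations
# for a single forward pass (illustrative probe, not the paper's setup).
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
nonzero, total = 0, 0

def count_hook(_module, _inputs, output):
    global nonzero, total
    nonzero += (output != 0).sum().item()
    total += output.numel()

# Hook every ReLU so all rectified activations are counted.
for m in model.modules():
    if isinstance(m, torch.nn.ReLU):
        m.register_forward_hook(count_hook)

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))  # single random-input probe

print(f"non-zero activation fraction: {nonzero / total:.2%}")
```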
“…There are also attacks targeting the availability of DNN models. Shumailov et al [15] propose an attack that generates sponge examples, which significantly increase energy consumption during inference. This also leads to longer run times at inference, negatively affecting model availability.…”
Section: Deep Learning (mentioning)
confidence: 99%