2020
DOI: 10.48550/arxiv.2006.03463
Preprint

Sponge Examples: Energy-Latency Attacks on Neural Networks

Abstract: The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While this enabled us to train large-scale neural networks in datacenters and deploy them on edge devices, the focus so far has been on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency is critical. We show how adversaries can exploit carefully crafted sponge examples, which are inputs designed to ma…
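The abstract only gestures at the mechanism: search for inputs that push a model toward worst-case rather than average-case energy use or inference latency. As a rough illustration of that idea (not the authors' method, whose search procedure is not reproduced in this excerpt), the sketch below runs a naive mutate-and-select loop over candidate inputs and keeps whichever candidates make a hypothetical stand-in victim model, torchvision's resnet18, slowest to evaluate.

```python
# Illustrative only: a naive black-box mutate-and-select search for inputs
# that maximize measured inference latency. The victim model and every
# parameter here are stand-ins, not taken from the paper.
import time
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # weights=None: torchvision >= 0.13

def latency(x: torch.Tensor, repeats: int = 3) -> float:
    """Average wall-clock time of a forward pass on input batch x."""
    with torch.no_grad():
        model(x)                               # warm-up run
        t0 = time.perf_counter()
        for _ in range(repeats):
            model(x)
        return (time.perf_counter() - t0) / repeats

# Start from random images and greedily keep the slowest ("spongiest") ones.
population = [torch.rand(1, 3, 224, 224) for _ in range(6)]
for generation in range(10):
    parents = sorted(population, key=latency, reverse=True)[:3]
    children = [(p + 0.1 * torch.randn_like(p)).clamp(0, 1) for p in parents]
    population = parents + children

best = max(population, key=latency)
print(f"slowest candidate: {latency(best) * 1e3:.2f} ms per forward pass")
```

Note that for a fixed-size vision model on dense hardware the forward-pass time barely depends on input content; a meaningful sponge effect arises mainly on hardware or models whose work scales with the input, such as zero-skipping accelerators or variable-length sequence models.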

Cited by 10 publications (13 citation statements) · References 43 publications
“…The Integrity of ML systems can be compromised by adversarial examples [Biggio et al 2013; Szegedy et al 2014] and data poisoning attacks [Biggio et al 2012]. Finally, resource-depletion attacks [Shumailov et al 2020; Hong et al 2020a] can threaten the Availability of ML systems.…”
Section: Data Hub Solution
confidence: 99%
“…Several security threats are studied regarding machine learning focusing on the basic information security triad: confidentiality, integrity, and availability. For instance (Shumailov et al, 2020) present an ML attack targeting the model's availability. According to (He et al, 2019) the main attack categories for integrity are adversarial and poisoning attacks, while for confidentiality, these are model extraction and model inversion.…”
Section: Related Work
confidence: 99%
“…In model reverse engineering, crafted inputs allow to deduce from a trained model whether dropout was used and other architectural choices [63]. Finally, sponge attacks aim to increase energy consumption of the classifier at test time [72].…”
Section: Adversarial Machine Learning
confidence: 99%
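The quote above characterizes sponge attacks as driving up a classifier's energy consumption at test time. A common proxy for that cost on accelerators that skip zero-valued activations is activation density, i.e. how many activations are non-zero. Purely as an illustration under that assumption (the toy model, surrogate loss, and hyperparameters are all hypothetical, not taken from the cited works), the sketch below records ReLU outputs with forward hooks and runs gradient ascent on the input to inflate total activation magnitude.

```python
# Illustrative only: a white-box variant that treats total activation
# magnitude (a stand-in for the number of non-zero activations, i.e. work a
# zero-skipping accelerator cannot avoid) as a surrogate energy score and
# runs gradient ascent on the input. Toy model; not the cited authors' code.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

acts = []
for layer in model:
    if isinstance(layer, nn.ReLU):
        # Record each ReLU output so we can score "energy" per forward pass.
        layer.register_forward_hook(lambda _m, _in, out: acts.append(out))

x = torch.rand(1, 64, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    acts.clear()
    model(x)
    energy = sum(a.abs().sum() for a in acts)   # surrogate energy score
    optimizer.zero_grad()
    (-energy).backward()                        # ascent: maximize energy
    optimizer.step()
    with torch.no_grad():
        x.clamp_(0, 1)                          # keep the input in range

acts.clear()
model(x)
density = sum((a > 0).float().mean().item() for a in acts) / len(acts)
print(f"mean activation density of the sponge input: {density:.2%}")
```

Maximizing summed activation magnitude is only a differentiable surrogate; whether it translates into measurably higher energy or latency depends on the deployment hardware.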
“…During our interviews, we found evidence that semiautomated fraud on ML systems takes place in the wild. Our findings on mental models allow to tackle these threats by (I) aligning corporate workflows that enable all actors to understand AML threats with minimal effort, (II) developing tools […] [Figure 1 legend: pipeline stages data, design, training, model, deployment; attacks: poisoning [53,67], backdooring [19,36], model stealing [82], model reverse engineering [63], membership inference [71], evasion [23,78], adversarial reprogramming [27], adversarial initialization [32,52], weight perturbations [20], sponge attacks [72]] Figure 1: AML threats within the ML pipeline. Each attack is visualized as an arrow pointing from the step controlled to the point where the attack affects the pipeline.…”
Section: Introduction
confidence: 99%