2020 IEEE Security and Privacy Workshops (SPW)
DOI: 10.1109/spw50608.2020.00024

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks

Abstract: Backdoors and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be visually detected and weaken the efficacy of the attacks. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling. These attacks enable manipulating images such that they change their content when scaled to a specific resolution …
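The abstract's core idea — crafting an image whose content changes when downscaled — can be illustrated with a minimal sketch. This is a simplified toy version, assuming a square grayscale image and a plain nearest-neighbour scaler; the function names are illustrative and not taken from the paper, whose attack additionally handles bilinear/bicubic scalers and minimizes the visual perturbation. A nearest-neighbour scaler samples only one source pixel per output pixel, so overwriting just those sampled pixels fully controls the downscaled result while leaving the rest of the image untouched:

```python
import numpy as np

def nn_downscale(img: np.ndarray, s: int) -> np.ndarray:
    """Nearest-neighbour downscaling of a square image to s x s:
    keep exactly one source pixel per output cell."""
    S = img.shape[0]
    idx = (np.arange(s) * S) // s          # source row/col sampled for each output pixel
    return img[np.ix_(idx, idx)]

def craft_attack_image(src: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Toy image-scaling attack: overwrite only the pixels the scaler
    will sample, so nn_downscale(result) equals `target` while the
    full-resolution image still looks like `src`."""
    S, s = src.shape[0], target.shape[0]
    idx = (np.arange(s) * S) // s          # same sampling grid as the scaler
    out = src.copy()
    out[np.ix_(idx, idx)] = target         # plant target pixels on the grid
    return out
```

For a 256×256 source and an 8×8 target, only 64 of 65,536 pixels (about 0.1%) are modified, which is why the planted content is hard to spot at full resolution yet dominates the scaled output.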

Cited by 57 publications (22 citation statements)
References 9 publications
“…Inspired by Ref. [67], which proposes attacks in the data preprocessing phase, image-scaling attacks [68] combine a poisoning-based attack with image scaling to hide the poisoned details and evade detection.…”
Section: Clean-label Attack (mentioning)
confidence: 99%
“…This makes the detection of the trigger hard in their attack. Quiring et al. [14] discovered that image-scaling functions are generally vulnerable when subjected to attacks. Hence, they utilised image-scaling attacks for efficient Trojan injection while keeping the trigger hidden.…”
Section: A. Training Data Poisoning (mentioning)
confidence: 99%
“…The most common of such preprocessing steps is image resizing, an operation required to adapt the size of the images to be analyzed to the input size of the neural network's first layer. In [46], Quiring et al. exploit image-scaling preprocessing to hide the triggering pattern in the poisoned images. They do so by applying the so-called camouflage (CF) attack described in [47], whereby it is possible to build an image whose visual content changes dramatically after scaling (see the example reported in [47],…”
Section: + = (mentioning)
confidence: 99%