2019
DOI: 10.48550/arxiv.1906.00001
Preprint

Functional Adversarial Attacks

Cassidy Laidlaw, Soheil Feizi

Abstract: We propose functional adversarial attacks, a novel class of threat models for crafting adversarial examples to fool machine learning models. Unlike a standard ℓp-ball threat model, a functional adversarial threat model allows only a single function to be used to perturb input features to produce an adversarial example. For example, a functional adversarial attack applied on colors of an image can change all red pixels simultaneously to light red. Such global uniform changes in images can be less perceptible th…
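To make the threat model concrete, below is a minimal sketch (not the authors' implementation; the paper allows richer function classes than this). The single perturbing function is a hypothetical per-channel affine map f(c) = a·c + b, applied identically to every pixel and optimized to induce misclassification. The helper name, the affine form, and the hyperparameters are all assumptions for illustration.

```python
import torch

def functional_color_attack(model, x, y, steps=50, lr=0.01, bound=0.2):
    """Sketch of a functional adversarial attack: a single function f,
    here the per-channel affine map f(c) = a * c + b, perturbs ALL pixels
    uniformly. `bound` limits how far f may deviate from the identity.
    (Illustrative only; the paper's attacks use richer function classes.)"""
    a = torch.ones(3, 1, 1, requires_grad=True)   # per-channel scale
    b = torch.zeros(3, 1, 1, requires_grad=True)  # per-channel shift
    opt = torch.optim.Adam([a, b], lr=lr)
    for _ in range(steps):
        x_adv = (a * x + b).clamp(0, 1)           # same f applied everywhere
        loss = -torch.nn.functional.cross_entropy(model(x_adv), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                     # keep f close to identity
            a.clamp_(1 - bound, 1 + bound)
            b.clamp_(-bound, bound)
    return (a * x + b).clamp(0, 1).detach()
```

Note that the constraint is placed on the function's parameters rather than on a per-pixel ℓp ball, which is what makes the resulting perturbation globally uniform.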

Cited by 6 publications (12 citation statements). References 15 publications. Citing publications appeared between 2019 and 2022.
“…One way to achieve this is by applying subtle geometric transformations such as spatial transformations [37,1], translations and rotations [14] or pose changes [2] to the inputs. Other works consider recoloring [19,25,4], intermediate features [13,26,41] and inserting new objects or patches in the image [6]. A challenge for creating unrestricted adversarial examples and defending against them is introduced in [5] using the simple task of classifying between birds and bicycles.…”
Section: Adversarial Examples (mentioning, confidence: 99%)
“…We also evaluate the adversarially trained model against various unforeseen attacks to demonstrate the model's generalizable robustness. We consider several attacks, including recoloring [19,25], spatial transformations [37], perceptual [26], and additive perturbations [28]. Results are shown in Table 2 and are compared against other defense methods such as Adversarial Training with PGD (AT PGD) [28], AT Spatial [37], AT Recolor [25], PAT [26], and AT AdvProp [38].…”
Section: Low-level, Mid-level (mentioning, confidence: 99%)
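For reference, the AT PGD baseline mentioned above trains against worst-case ℓ∞ perturbations found by projected gradient descent. The following is a minimal sketch assuming a PyTorch classifier; the function names and the standard (eps = 8/255, 10-step) hyperparameters are illustrative, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard l-infinity PGD: ascend the loss, projecting back into
    the eps-ball around the clean input x after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient sign step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                 # stay a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One AT PGD step: train on adversarial examples instead of clean x."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```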
“…Many recent studies have explored semantic attacks. Semantic attacks are powerful for attacking defenses (Engstrom et al., 2017; Hosseini & Poovendran, 2018; Laidlaw & Feizi, 2019). Many semantic attacks are applicable to ImageNet; however, none of them considers increasing the radii of the certificates generated by certifiable defenses.…”
Section: A Appendix (mentioning, confidence: 99%)