2020
DOI: 10.1007/978-3-030-58536-5_29
Towards Automated Testing and Robustification by Semantic Adversarial Data Generation

Cited by 5 publications (5 citation statements)
References 19 publications
“…However, large datasets also contain substantial contextual biases, in background/context, rotation, viewpoint, etc. and have insufficient controls to ensure networks do not exploit trivial correlations in the data [2,60,69,48,47]. For instance, in [60] authors find that changing backgrounds in ImageNet significantly decreases average performance, and that choosing backgrounds in an adversarial manner can lead to misclassifying 87.5% of the images.…”
Section: Related Work
confidence: 99%
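The background-swap finding cited above can be illustrated with a minimal sketch: composite a segmented foreground onto each candidate background and keep the one that maximizes the victim model's loss. The function `adversarial_background`, the alpha-mask compositing, and the abstract `loss_fn` are illustrative assumptions here, not the implementation used in [60].

```python
import numpy as np

def adversarial_background(foreground, mask, backgrounds, loss_fn):
    """Pick the candidate background that maximizes the classifier's
    loss when composited with the segmented foreground.

    foreground, backgrounds[i]: HxW(xC) arrays; mask: 1 on foreground pixels.
    loss_fn: maps a composited image to a scalar loss of the victim model.
    """
    worst_loss, worst_img = -np.inf, None
    for bg in backgrounds:
        # Alpha-composite: keep the object, swap everything else.
        composite = mask * foreground + (1 - mask) * bg
        loss = loss_fn(composite)
        if loss > worst_loss:
            worst_loss, worst_img = loss, composite
    return worst_img, worst_loss
```

In practice `loss_fn` would wrap a trained classifier (e.g. cross-entropy against the true label), so the returned composite is the most damaging background in the candidate set.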
“…A recent strand of work is concerned with data augmentation for improving invariance against spurious correlations. Shetty et al (2020) propose to train object detectors on generated semantic adversarial data, effectively reducing the texture dependency of their model. Their finding is in line with (Geirhos et al, 2018) that proposes to transfer the style of paintings onto images and use them for data augmentation.…”
Section: Disentangled Representation Learning
confidence: 99%
“…semantic adversarial examples. These works often rely on synthetic data, using differentiable rendering or other optimization methods to find adversarial images by modifying scene parameters [5,20,21,22,23,24,25,26]. These include a custom differentiable renderer to perturb the camera, lighting, or object mesh vertices [20], and using a neural renderer where light is represented by network activations [21].…”
Section: Semantic Adversarial Attacks and In-distribution Brittleness
confidence: 99%
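The scene-parameter attacks described above share one loop: ascend the victim model's loss with respect to a semantic parameter (camera pose, light direction, mesh vertices) through a renderer. A minimal sketch follows; the finite-difference gradient stands in for a true differentiable renderer, and `render`, `loss_fn`, and the step sizes are all illustrative assumptions, not any cited system's API.

```python
def semantic_attack(theta, render, loss_fn, steps=50, lr=0.1, eps=1e-4):
    """Gradient-ascent on a scalar scene parameter theta (e.g. a light
    angle) to maximize the victim model's loss on the rendered image.

    render: theta -> image; loss_fn: image -> scalar loss.
    The gradient is estimated by central finite differences, standing in
    for backpropagation through a differentiable renderer.
    """
    for _ in range(steps):
        grad = (loss_fn(render(theta + eps))
                - loss_fn(render(theta - eps))) / (2 * eps)
        theta = theta + lr * grad  # ascend: make the model worse
    return theta
```

With a real differentiable renderer the finite-difference estimate would be replaced by an exact gradient (e.g. via autograd), but the attack loop is otherwise the same.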