2020
DOI: 10.48550/arxiv.2008.08384
Preprint
Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training

Abstract: Despite their performance, artificial neural networks are not reliable enough for most industrial applications. They are sensitive to noise, rotations, blurs and adversarial examples. There is a need to build defenses that protect against a wide range of perturbations, covering the most common corruptions as well as adversarial examples. We propose a new data augmentation strategy called M-TLAT, designed to address robustness in a broad sense. Our approach combines the Mixup augmentation and a new …

Cited by 2 publications
(2 citation statements)
References 32 publications
“…To defend against adaptive attacks, several mixup-based adversarial training techniques have been proposed (Lamb et al., 2019; Laugros et al., 2020). Interpolated adversarial training (IAT) (Lamb et al., 2019) trains on interpolations of adversarial data along with interpolations of natural data.…”
Section: Mixup For Robustness
confidence: 99%
“…Interpolated adversarial training (IAT) (Lamb et al., 2019) trains on interpolations of adversarial data along with interpolations of natural data. Mixup with targeted labeling adversarial training (M-TLAT) (Laugros et al., 2020) combines vanilla mixup with targeted labeling to enhance AT. However, these methods behave linearly when employing vanilla mixup in AT, and such linear behaviors will damage the adversarial robustness (see Section 3.1 for details).…”
Section: Mixup For Robustness
confidence: 99%
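The citation statements above all build on the vanilla mixup interpolation (Zhang et al., 2018): each training sample is a convex combination of two examples, and its label is the same convex combination of their one-hot labels. The sketch below is not code from any of the cited works; it is a minimal pure-Python illustration of that interpolation, with the function name `mixup` and parameter `alpha` chosen here for clarity.

```python
import random

def mixup(x1, y1, x2, y2, alpha=1.0, rng=random):
    """Vanilla mixup: convex combination of two examples and their
    one-hot labels, with weight lam drawn from Beta(alpha, alpha).

    x1, x2: feature vectors (lists of floats of equal length)
    y1, y2: one-hot label vectors (lists of floats of equal length)
    """
    # lam close to 1 keeps the first example; lam close to 0 keeps the second.
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1.0 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1.0 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Toy example: two 4-pixel "images" with one-hot labels over 3 classes.
x, y, lam = mixup([1.0, 1.0, 1.0, 1.0], [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# The mixed label is still a probability distribution (sums to 1),
# which is what makes the loss of a mixup-trained network well defined.
assert abs(sum(y) - 1.0) < 1e-9
```

The adversarially trained variants discussed above (IAT, M-TLAT) apply this same interpolation, but with one or both inputs replaced by adversarial examples and, in M-TLAT, with a targeted labeling scheme for the adversarial component.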