2020
DOI: 10.1007/978-3-030-62144-5_2
Can Attention Masks Improve Adversarial Robustness?

Cited by 9 publications (4 citation statements) · References 8 publications
“…This approach reduces the dimension of the input space, thereby shrinking the space of potential adversarial examples that can fool the network (Simon-Gabriel et al., 2019). Several works have demonstrated that removing the background of inputs with hand-designed attention masks prior to classification can improve robustness (Vaishnavi et al., 2020). In contrast, the HTDA model automatically generates attention maps from the extracted features, without manual annotations.…”
Section: Results
confidence: 99%
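A minimal sketch of the background-masking idea described in the statement above, assuming a PyTorch classifier; `model` and `mask` are hypothetical placeholders, not the interface of Vaishnavi et al. (2020):

import torch

def masked_predict(model, x, mask):
    """Classify an image batch after removing its background.

    x:    input batch of shape (N, C, H, W)
    mask: binary tensor of shape (1, 1, H, W); 1 keeps the object region,
          0 zeroes out the background, shrinking the effective input
          space an adversary can perturb.
    """
    x_masked = x * mask      # background pixels are forced to zero
    return model(x_masked)   # the classifier only sees the foreground

The mask here is hand-designed and fixed, which is exactly the limitation the quoted passage contrasts with learned attention maps.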
“…Training with augmented data also degrades neural-network performance compared to training on the original dataset. There are many other defense methods, such as attention-based methods [19], [20], [21] and regularization methods [22], [23], [24], [25]. These defense methods modify the neural network but do not guarantee generality.…”
Section: Related Work
confidence: 99%
“…In general, a certificate is a trainable function that is optimized at training time to ensure that the decision boundary of the classifier is guaranteed not to change within a perturbation radius. Beyond limitations intrinsic to the nature of certified classifiers, such as the trade-off between certified radius and accuracy and the training complexity (Vaishnavi et al., 2022), it is important to mention that these techniques generally require a large amount of data (e.g. randomized smoothing (Cohen et al., 2019)) to replace the channel with a certified one.…”
Section: Related Work
confidence: 99%
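For context on the randomized-smoothing example cited in the last statement, a minimal prediction sketch in the spirit of Cohen et al. (2019); `base_classifier`, `sigma`, and `n_samples` are illustrative names rather than the cited paper's exact interface, and the certification step (a statistical test over many more noise samples) is omitted:

import torch

def smoothed_predict(base_classifier, x, num_classes, sigma=0.25, n_samples=100):
    """Majority vote over Gaussian-noised copies of a single input x (shape (C, H, W)).

    The smoothed classifier returns the class the base classifier
    predicts most often under isotropic Gaussian noise; the need for
    many noisy samples per input reflects the large data requirement
    mentioned in the quoted passage.
    """
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)                   # add Gaussian noise
        pred = base_classifier(noisy.unsqueeze(0)).argmax(dim=1)  # base prediction
        counts[pred.item()] += 1
    return counts.argmax().item()                                 # majority-vote class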