2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00816

FDA: Feature Disruptive Attack

Abstract: Though Deep Neural Networks (DNN) show excellent performance across various computer vision tasks, several works show their vulnerability to adversarial samples, i.e., image samples with imperceptible noise engineered to manipulate the network's prediction. Adversarial sample generation methods range from simple to complex optimization techniques. Majority of these methods generate adversaries through optimization objectives that are tied to the pre-softmax or softmax output of the network. In this work we, (i…
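For context on the abstract's point that most attack objectives are tied to the network's pre-softmax or softmax output, the following is a minimal PyTorch sketch of such an output-space attack (single-step, FGSM-style). The model, image, and label arguments are placeholders, and this is not the FDA objective described in the paper.

```python
# Minimal sketch (not from the paper): an output-space attack whose objective
# is tied to the softmax/cross-entropy output of the network (FGSM-style).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=8 / 255):
    """One-step attack that increases the cross-entropy of the softmax output."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                      # pre-softmax scores
    loss = F.cross_entropy(logits, label)      # objective tied to the softmax output
    loss.backward()
    # Perturb in the direction that increases the classification loss.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```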

Cited by 80 publications (41 citation statements)
References 42 publications
“…Similarly, Mor et al. [177] study optimal strategies against generative adversarial attacks. A Feature Disruptive Attack was proposed in [178] that aims to disrupt the internal representations of the model for adversarial samples, rather than focusing solely on altering the prediction.…”
Section: G. Miscellaneous Attacks (mentioning)
confidence: 99%
“…More recently, attack methods applicable to tasks other than classification have been developed. For instance, [7] proposed the feature disruptive attack (FDA) method, which attempts to find a perturbation using the intermediate features of a given model. These methods focus on the image classification task, and an in-depth study of the vulnerability of image-to-image models across various tasks has not been conducted.…”
Section: Related Work (mentioning)
confidence: 99%
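As a rough illustration of the idea in the excerpt above — perturbing the input so that the model's intermediate features are disrupted, rather than attacking the softmax output directly — here is a hedged PyTorch sketch. The feature_extractor callable (e.g., a truncated network returning an intermediate activation), the L-infinity budget, and the distance-maximizing loss are illustrative assumptions; the exact layer-wise objective of FDA is not reproduced here.

```python
# Illustrative feature-space attack: maximize the distance between the
# intermediate features of the adversarial and clean images.
# This is a sketch of the general idea, not the exact FDA loss.
import torch

def feature_disruption_attack(feature_extractor, image, epsilon=8 / 255,
                              step=2 / 255, iters=10):
    clean_feat = feature_extractor(image).detach()   # intermediate-layer features
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        # Negative distance, so taking a descent step pushes features apart.
        loss = -torch.norm(feature_extractor(adv) - clean_feat)
        loss.backward()
        with torch.no_grad():
            adv = adv - step * adv.grad.sign()                    # increase feature distance
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # stay inside the L-inf ball
            adv = adv.clamp(0, 1)                                 # keep a valid image
        adv = adv.detach()
    return adv
```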
“…Further instances of follow-up iterative algorithms include Variance-Reduced I-FGSM (vr-IGSM) [19] and PGD [13]. The above-mentioned algorithms and other recent works [20]–[25] compute image-specific adversarial perturbations that appear as insignificant noise to the human eye but completely confuse the models. Moosavi-Dezfooli et al. [26] first demonstrated the possibility of fooling deep models simultaneously on a large number of images with Universal Adversarial Perturbations.…”
Section: A. Adversarial Attacks (mentioning)
confidence: 99%
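To make the iterative attacks referenced in this excerpt concrete (I-FGSM / PGD style, image-specific L-infinity perturbations), below is a minimal PGD sketch in PyTorch; the model, image, label, and budget values are placeholders, not parameters taken from the cited works.

```python
# Minimal PGD sketch: iterative gradient ascent on the classification loss,
# projected back into an epsilon-ball around the clean image each step.
import torch
import torch.nn.functional as F

def pgd_attack(model, image, label, epsilon=8 / 255, step=2 / 255, iters=40):
    # Random start inside the epsilon-ball, as in standard PGD.
    adv = (image + torch.empty_like(image).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(iters):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        with torch.no_grad():
            adv = adv + step * adv.grad.sign()                    # ascend the loss
            adv = image + (adv - image).clamp(-epsilon, epsilon)  # project to the L-inf ball
            adv = adv.clamp(0, 1)                                 # keep pixel values valid
    return adv.detach()
```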