2022
DOI: 10.1609/aaai.v36i8.20902
CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets

Abstract: Poisoning attacks are emerging threats to deep neural networks where the adversaries attempt to compromise the models by injecting malicious data points in the clean training data. Poisoning attacks target either the availability or integrity of a model. The availability attack aims to degrade the overall accuracy while the integrity attack causes misclassification only for specific instances without affecting the accuracy of clean data. Although clean-label integrity attacks are proven to be effective in rece…
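As a concrete illustration of the clean-label availability setting described in the abstract, below is a minimal sketch of how correctly labeled, generator-produced samples might be mixed into a clean training set. The conditional generator `G`, the poison rate, and the data shapes are illustrative assumptions, not the procedure proposed in this paper.

```python
# Hypothetical sketch of clean-label availability poisoning: generated images
# keep the class labels they were conditioned on, so the poisons pass a visual
# label check, yet a model trained on the mixed set can lose overall accuracy.
# The generator interface, poison rate, and shapes are illustrative assumptions.
import torch
from torch.utils.data import TensorDataset, ConcatDataset

def make_clean_label_poisons(G, n_poisons, n_classes=10, latent_dim=128):
    """Draw samples from a (pretrained) conditional generator and keep the
    labels they were conditioned on, i.e. the labels stay 'clean'."""
    z = torch.randn(n_poisons, latent_dim)
    y = torch.randint(0, n_classes, (n_poisons,))
    with torch.no_grad():
        x = G(z, y)  # images that plausibly belong to class y
    return TensorDataset(x, y)

def poison_training_set(clean_ds, G, rate=0.1, n_classes=10):
    """Availability setting: inject a small fraction of generated samples so
    the victim's overall test accuracy degrades after training."""
    n_poisons = int(rate * len(clean_ds))
    return ConcatDataset([clean_ds, make_clean_label_poisons(G, n_poisons, n_classes)])
```

By contrast, an integrity attack in the abstract's terminology would leave clean accuracy intact and only change predictions on specific attacker-chosen inputs.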

Cited by 15 publications (17 citation statements)
References 18 publications
“…Shafahi et al. [35] constructed clean-label poisoning data by feature collision, carefully designing poisoning samples with high similarity to benign samples in the feature space. Zhao et al. [36] proposed a poisoning attack method with high stealthiness against image classification models based on generative adversarial networks. Kurita et al. [37] proposed a poisoning sample generation method for pre-trained models, which can destroy models dealing with different computer vision tasks.…”
Section: Poisoning Attacks
confidence: 99%
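For context on the feature-collision idea cited above, here is a minimal sketch in the spirit of Shafahi et al. [35]: a base image is nudged toward a target instance in feature space while staying visually close to the base, so it keeps its clean label. The fixed feature extractor `feat`, the optimizer, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch of feature-collision poison crafting: minimize the distance
# to the target in feature space plus a penalty that keeps the poison close
# to the base image in pixel space. All hyperparameters are assumptions.
import torch

def craft_feature_collision(feat, base, target, steps=200, lr=0.01, beta=0.1):
    poison = base.clone().requires_grad_(True)
    target_feat = feat(target).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((feat(poison) - target_feat) ** 2).sum() \
               + beta * ((poison - base) ** 2).sum()  # stay visually near the base
        loss.backward()
        opt.step()
    return poison.detach()  # carries the base image's original (clean) label
```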
“…Poisoning Attack Settings: To demonstrate the effectiveness of the proposed defense framework against poisoning attacks, we use different poisoning sample generation strategies, including the clean-label attack [35], back-gradient attack [38], generative attack [65], feature selection attack [66], transferable clean-label attack [67], and concealed poisoning attack [68]. For each attack method, we consider only the untargeted attack scenario, which degrades the target model's predictions for all categories of pixels.…”
Section: Experimental Setup and Implementation Details
confidence: 99%
“…Concern over the ease of DNN model theft has motivated researchers to extend these concepts to deep learning. To this end, researchers have leveraged model poisoning and backdoor attacks as a method of embedding the owner's signature into a model (Zhao and Lao 2022; Li, Wang, and Barni 2021). This induces abnormal outputs for specific inputs that can identify the DNN.…”
Section: DNN Watermarking
confidence: 99%
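To make the backdoor-style watermarking idea above concrete, here is a minimal verification sketch: the owner keeps a secret trigger set whose labels were embedded during training, and high agreement on that set is taken as evidence of ownership. The trigger set, the model interface, and the decision threshold are illustrative assumptions, not a method from the cited works.

```python
# Minimal sketch of backdoor-style watermark verification. A suspect model is
# queried on the owner's secret trigger inputs; a high match rate with the
# embedded trigger labels supports an ownership claim. Threshold is assumed.
import torch

def verify_watermark(model, trigger_inputs, trigger_labels, threshold=0.9):
    model.eval()
    with torch.no_grad():
        preds = model(trigger_inputs).argmax(dim=1)
    match_rate = (preds == trigger_labels).float().mean().item()
    return match_rate >= threshold, match_rate
```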
“…Since AoA only modifies the loss function, it can be readily combined with other transferability-enhancing methods to achieve SOTA performance. In study [213], the authors of [133] introduce a clean-label approach for the poisoning availability attack, which reveals the intrinsic imperfection of classifiers. Paper [214] highlights how the global reasoning of (scaled) dot-product attention can represent a significant vulnerability when faced with adversarial patch attacks.…”
Section: Black-box Attacks
confidence: 99%