2021
DOI: 10.1109/tifs.2021.3080522

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

Cited by 74 publications (36 citation statements)
References 14 publications
“…It is important to highlight the attacker's capability in other scenarios and the assumptions made about the attacker, which are sometimes too optimistic while in other scenarios represent more realistic conditions [71]. In the literature, there are many examples of assumptions regarding the capability of the attacker.…”
Section: A Discussion Archives (citation type: mentioning)
confidence: 99%
“…This defense system employs a self-adaptive learning framework to detect suspicious results falling in the category of false negatives and keep them out of the training process. [71] is an attack-agnostic defense system against poisoning attacks named De-Pois. The defense strategy depicted in this work is not attack-specific, i.e.…”
Section: SVM Resistance Enhancement (citation type: mentioning)
confidence: 99%
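The excerpt above only names the De-Pois strategy, so a minimal sketch of the attack-agnostic filtering idea it refers to may help: a mimic model trained on trusted (or GAN-augmented) clean data scores each candidate training sample, and low-confidence samples are discarded before training. The mimic_model object, its scikit-learn-style predict_proba interface, and the threshold tau are illustrative assumptions, not the authors' implementation.

# Hedged sketch of attack-agnostic poisoned-sample filtering in the
# spirit of De-Pois: a mimic model trained on clean data scores each
# incoming sample, and samples whose confidence under the claimed label
# falls below a validation-derived threshold are rejected.
# mimic_model and tau are illustrative placeholders.
import numpy as np

def filter_poisoned(samples, labels, mimic_model, tau):
    """Return indices of samples the mimic model scores above tau."""
    clean_idx = []
    for i, (x, y) in enumerate(zip(samples, labels)):
        # Confidence the mimic model assigns to the claimed label y
        # (x is assumed to be a 1-D feature vector).
        probs = mimic_model.predict_proba(x[np.newaxis, :])[0]
        if probs[y] >= tau:
            clean_idx.append(i)
    return clean_idx

The design choice mirrored here is that the detector never needs to know which poisoning attack was used; it only measures disagreement between a sample and a model of clean behavior, which is what makes the defense attack-agnostic.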
“…Muñoz-González et al. [27] proposed a similar generative adversarial network based data poisoning method that uses a generator to produce poisoning samples which maximize the classification error on the target task while minimizing the discriminator's ability to distinguish poisoned data from normal data. These data poisoning methods are conducted on centralized datasets, where the attacker has strong prior knowledge of the training dataset [8]. Thus, they are not applicable when training data is decentralized and cannot be exchanged.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
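To make the mechanism in this excerpt concrete, below is a hedged PyTorch sketch of a GAN-style poisoning generator of the kind described: the generator crafts samples that raise the target classifier's loss while a discriminator tries to tell poisoned from clean data. Network sizes, the trade-off weight alpha, and the use of negated cross-entropy as the error-maximization term are illustrative assumptions, not the exact formulation of Muñoz-González et al. [27].

# Sketch of GAN-based poisoning-sample generation: the generator is
# rewarded for (a) increasing the target classifier's loss and
# (b) fooling a discriminator that separates clean from poisoned data.
import torch
import torch.nn as nn

latent_dim, feat_dim, n_classes, alpha = 32, 20, 2, 0.5

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
D = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))  # logit: clean vs. poisoned
clf = nn.Sequential(nn.Linear(feat_dim, n_classes))  # stand-in target classifier

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def poisoning_step(x_clean, y_clean):
    z = torch.randn(x_clean.size(0), latent_dim)
    x_pois = G(z)

    # Discriminator update: label clean data 1, generated data 0.
    opt_d.zero_grad()
    d_loss = bce(D(x_clean), torch.ones(x_clean.size(0), 1)) + \
             bce(D(x_pois.detach()), torch.zeros(x_pois.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: maximize classifier error (negated cross-entropy,
    # an assumed proxy) while looking "clean" to the discriminator.
    opt_g.zero_grad()
    g_loss = -ce(clf(x_pois), y_clean) + \
             alpha * bce(D(x_pois), torch.ones(x_pois.size(0), 1))
    g_loss.backward()
    opt_g.step()

As the excerpt notes, this style of attack presumes centralized access to (or strong prior knowledge of) the training data, which is exactly what breaks down in decentralized settings.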
“…Toward robustness. For models to defend against such deceptions and threats, researchers have set out to close the gap between adversarial accuracy and standard accuracy [6,34] and have proposed a wide range of defense methods for different types of attack [8,45,56]. Adversarial training, discussed in Section 2.3, is the most popular method against adversarial attacks.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
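Since this excerpt singles out adversarial training as the most popular defense, a minimal sketch may be useful: each batch is replaced by single-step FGSM adversarial examples before the usual parameter update. The epsilon value and the one-step attack are simplifying assumptions; multi-step PGD is common in practice.

# Minimal adversarial-training step: perturb the batch with one FGSM
# gradient-sign step, then train on the perturbed batch.
import torch
import torch.nn as nn

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    loss_fn = nn.CrossEntropyLoss()

    # Craft FGSM adversarial examples: one gradient-sign step on the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Train on the adversarial batch instead of the clean one.
    optimizer.zero_grad()
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()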