2020
DOI: 10.48550/arxiv.2007.10760
Preprint

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

Yansong Gao,
Bao Gia Doan,
Zhi Zhang
et al.

Abstract: Backdoor attacks insert hidden associations or triggers into deep learning models to override correct inference (such as classification) and make the system behave maliciously according to an attacker-chosen target, while behaving normally in the absence of the trigger. As a new and rapidly evolving realistic attack, it could result in dire consequences, especially considering that the backdoor attack surface is broad. In 2019, the U.S. Army Research Office started soliciting countermeasures and launching T…

Cited by 45 publications (77 citation statements)
References 133 publications (364 reference statements)
“…Therefore, this work mainly leverages data poisoning to insert a backdoor into the object detector. This attacking strategy has been shown to be very effective at implanting backdoors into various models [7]. As opposed to changing the label to the targeted class when backdooring a classification model [5], here we modify the annotation of the Poisoning the Training Dataset.…”
Section: Attacking Strategy
confidence: 99%
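The quoted attack strategy (dirty-label data poisoning: stamping a trigger on a fraction of training inputs and relabeling them to the attacker's target) can be sketched roughly as below. This is a minimal illustrative sketch, not the cited papers' implementation; the `poison_dataset` function, the 4x4 white-square trigger, and the 5% poison rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Dirty-label data poisoning sketch: stamp a small trigger patch on a
    random subset of training images and relabel them as the attacker's
    target class. Real attacks tune trigger shape, location, opacity, and
    poison rate; this only illustrates the mechanism."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -4:, -4:] = 1.0   # 4x4 white trigger in the bottom-right corner
    labels[idx] = target_class    # flip labels so the model associates
                                  # trigger -> target_class during training
    return images, labels, idx
```

A model trained on the returned dataset tends to learn the trigger-to-target shortcut while keeping clean accuracy, which is what makes the attack stealthy.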
“…As this is the first work that investigates the backdoor vulnerability of object detectors built upon DL, there is so far no existing solution to such an attack. As a matter of fact, to the best of our knowledge, nearly all (if not all) DL backdoor countermeasures focus on classification tasks, especially image classification [7]. They (i.e., the state-of-the-art approaches of [39]-[42]) cannot be directly mounted to protect the object detection task, which goes beyond classification.…”
Section: Countermeasures
confidence: 99%
“…To avoid or mitigate the effects of backdoor attacks on collaborative learning systems, several backdoor defense methods have been proposed [74], [130]-[132]. We divide existing methods into two categories based on the subject of inspection: data inspection and model inspection.…”
Section: B Backdoor Defenses
confidence: 99%
“…Data inspection defenses try to distinguish poisoned data from normal data, while model inspection approaches [48], [49] rely on anomaly detection techniques to identify abnormal model behaviour caused by backdoors [130]. These defenses can be carried out during or after the training process.…”
Section: B Backdoor Defenses
confidence: 99%
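The data-inspection idea quoted above can be illustrated with a deliberately naive sketch: within each class, flag samples whose feature representation lies far from the class centroid, since poisoned samples relabeled to the target class often sit as outliers there. The `flag_suspicious` function and the z-score threshold are assumptions for illustration only; practical defenses in this family (e.g., spectral signatures or activation clustering) are considerably more robust.

```python
import numpy as np

def flag_suspicious(features, labels, z_thresh=3.0):
    """Naive data-inspection sketch: per class, compute the centroid of the
    feature representations and flag samples whose distance to it is an
    outlier (z-score above z_thresh). Illustrative only, not a deployed
    defense."""
    flagged = np.zeros(len(features), dtype=bool)
    for c in np.unique(labels):
        mask = labels == c
        centroid = features[mask].mean(axis=0)
        dists = np.linalg.norm(features[mask] - centroid, axis=1)
        z = (dists - dists.mean()) / (dists.std() + 1e-12)
        flagged[np.where(mask)[0][z > z_thresh]] = True
    return flagged
```

In practice the features would come from an intermediate layer of the suspect model rather than raw inputs, which is what lets the abnormal backdoor behaviour surface as a separable cluster.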