2022
DOI: 10.1109/ojsp.2022.3190213
An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences

Abstract: Together with impressive advances touching every aspect of our society, AI technology based on Deep Neural Networks (DNNs) is bringing increasing security concerns. While attacks operating at test time initially monopolised the attention of researchers, backdoor attacks, which exploit the possibility of corrupting DNN models by interfering with the training process, represent a further serious threat undermining the dependability of AI techniques. In a backdoor attack, the attacker corrupts the training data s…
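To make the training-time corruption described in the abstract concrete, below is a minimal NumPy sketch of BadNets-style data poisoning, one of the attack families such surveys cover. The corner-patch trigger, the 5% poison rate, and all function names are illustrative assumptions, not the survey's own construction.

```python
import numpy as np

def poison_training_set(images, labels, target_class, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch onto a random fraction of training
    images and relabel them with the attacker's chosen target class.
    A model trained on this data tends to map any test input carrying
    the trigger to `target_class` while behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0  # 3x3 white square in the corner (the trigger)
    labels[idx] = target_class   # label flip: trigger -> attacker's class
    return images, labels
```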

Cited by 35 publications (16 citation statements). References 108 publications.
“…Most closely related is the line of work on data poisoning attacks, which seeks to understand how data points can be adversarially "poisoned" to degrade the performance of a predictive model at test time. We refer to recent surveys for an overview of data poisoning attacks [Tian et al., 2022] and, more specifically, backdoor attacks [Guo et al., 2022]. While the literature on poisoning attacks focuses predominantly on diminishing the performance of the learning algorithm, documented empirical successes [Cherepanova et al., 2021, Geiping et al., 2021] hint at the impact that algorithmic collective action can have on deep learning models.…”
Section: Related Work
confidence: 99%
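For the untargeted, performance-degrading flavour of poisoning this statement highlights, one minimal sketch is random label flipping: corrupting a fraction of labels so that the trained model's test accuracy drops, with no trigger implanted. The function name and flip rate here are illustrative assumptions.

```python
import numpy as np

def flip_labels(labels, num_classes, flip_rate=0.1, seed=0):
    """Untargeted (availability) poisoning sketch: reassign a fraction
    of training labels to a different random class, degrading overall
    test-time accuracy rather than implanting a targeted backdoor."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(flip_rate * len(labels)), replace=False)
    offsets = rng.integers(1, num_classes, size=len(idx))
    labels[idx] = (labels[idx] + offsets) % num_classes  # guaranteed wrong label
    return labels
```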
“…Some existing surveys, e.g., [44], [45], [46], and [47], considered backdoor attacks and defenses as part of the robustness threats to WFL, but did not highlight the limitations of existing backdoor attack and defense methods. On the other hand, [48], [49], [50], and [51] treated WFL as one of the deep learning applications when discussing the impact of backdoor attacks, but provided no detailed analysis of the vulnerability of WFL to backdoor attacks. In [52] and [53], the theoretical working mechanisms of backdoor attacks and defenses for WFL were reviewed.…”
Section: B. Review of Existing Surveys and Gap Analysis
confidence: 99%
“…However, the limitations of the existing methodologies were not systematically analyzed. The survey [50] discussed backdoor attack strategies in depth, but not in the context of WFL. Also, to the best of the authors' knowledge, none of the existing surveys has taken WCN into consideration.…”
Section: B. Review of Existing Surveys and Gap Analysis
confidence: 99%
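To make the federated setting concrete, the sketch below shows the model-replacement construction commonly studied in the FL backdoor literature: a malicious client scales its update so that, after averaging, the global model lands near its locally backdoored weights. The flat weight-vector representation and the exact scaling factor are simplifying assumptions, not taken from the cited surveys.

```python
import numpy as np

def fedavg(global_w, client_deltas):
    """Server step of FedAvg: apply the mean of the clients' weight deltas."""
    return global_w + np.mean(client_deltas, axis=0)

def model_replacement_delta(global_w, backdoored_w, num_clients):
    """Malicious client's update: scale the step toward the backdoored
    weights by the number of clients so it survives the 1/num_clients
    averaging. If honest deltas are near zero (model near convergence),
    the aggregated model is pulled almost exactly onto backdoored_w."""
    return num_clients * (backdoored_w - global_w)

# Toy round: one attacker among four clients, honest clients send ~0 updates.
g = np.zeros(8)
bd = np.ones(8)            # attacker's locally backdoored weights
deltas = [np.zeros(8)] * 3 + [model_replacement_delta(g, bd, num_clients=4)]
print(fedavg(g, deltas))   # ~= bd: the backdoor is transplanted in one round
```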
“…The unlearning strategy first detects the malicious behavior and then defends against the backdoor attack by performing reverse learning, either by utilizing the implicit hypergradient [ZCP+21] or by modifying the loss function [LLK+21]. Moreover, comprehensive backdoor attack surveys can be found in [LJLX22] and [GTB22].…”
Section: Defending Backdoor Attack
confidence: 99%
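As a rough sketch of the loss-modification idea mentioned above (in the spirit of [LLK+21], not a reproduction of it): after detecting suspected trigger samples, the defender fine-tunes by descending on the clean-data loss while ascending on the loss over the detected samples, so the model "unlearns" the trigger-to-target mapping. The update rule, names, and weighting term alpha are illustrative assumptions.

```python
import numpy as np

def unlearning_step(weights, grad_clean, grad_trigger, lr=0.01, alpha=1.0):
    """One parameter update of backdoor unlearning by loss modification:
    descend on the clean-data loss while ascending on the loss over
    detected trigger samples, i.e. optimise L_clean - alpha * L_trigger.
    grad_clean and grad_trigger are gradients of the task loss on the
    respective batches; alpha weights the unlearning (ascent) term."""
    return weights - lr * (grad_clean - alpha * grad_trigger)
```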