2022
DOI: 10.1145/3551636

A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning

Abstract: The prosperity of machine learning has been accompanied by increasing attacks on the training process. Among them, poisoning attacks have become an emerging threat during model training. Poisoning attacks have profound impacts on the target models, e.g., making them unable to converge or manipulating their prediction results. Moreover, the rapid development of recent distributed learning frameworks, especially federated learning, has further stimulated the development of poisoning attacks. Defending against po…

Cited by 95 publications (28 citation statements)
References 78 publications
“…Malicious FL participants can try to manipulate the global model to either produce specific outputs for specific inputs or simply degrade the overall accuracy. These attacks are referred to as backdoor (Bagdasaryan et al, 2020;Xie et al, 2020) and poisoning attacks (Tian et al, 2022), respectively. In terms of poisoning attacks, the two options are to perform data poisoning (Biggio et al, 2012;Tolpegin et al, 2020) or model poisoning (Wang et al, 2020;Fang et al, 2020).…”
Section: Attacks
Citation type: mentioning (confidence: 99%)
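The statement above distinguishes data poisoning from model poisoning in federated learning. The simplest data-poisoning strategy it alludes to is label flipping; here is a minimal sketch in plain Python. The function name, parameters, and toy dataset are illustrative, not taken from any of the cited works.

```python
import random

def label_flip_poison(dataset, source_label, target_label, fraction, seed=0):
    """Flip a fraction of samples labeled `source_label` to `target_label`
    before training -- an illustrative label-flipping data-poisoning sketch."""
    rng = random.Random(seed)
    # Indices of samples the attacker is willing to corrupt
    candidates = [i for i, (_, y) in enumerate(dataset) if y == source_label]
    flip = set(rng.sample(candidates, int(len(candidates) * fraction)))
    # Features are left untouched; only the labels of chosen samples change
    return [(x, target_label if i in flip else y)
            for i, (x, y) in enumerate(dataset)]

# Toy dataset of (feature, label) pairs: labels alternate 0, 1, 0, 1, ...
data = [(float(i), i % 2) for i in range(10)]
poisoned = label_flip_poison(data, source_label=0, target_label=1, fraction=0.5)
```

A local attacker in FL would train its model update on such a poisoned dataset; defenses cited in the survey aim to detect or bound the effect of the resulting update.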
“…Their survey, however, falls short of assessing or demonstrating the connection between these attacks, as well as the connection between backdoor attacks and defenses…”

Related surveys compared:
- Backdoor attacks and defenses [37], 2022
- Backdoor attacks and defenses in FL [38], 2022
- Poisoning attacks and countermeasures [39], 2022
- FL challenges, contributions, and trends [40], 2021
- Privacy-preserving FL [41], 2021
Section: Related Surveys
Citation type: mentioning (confidence: 99%)
“…Privacy attacks in machine learning [5,22] include membership inference attacks [24], model reconstruction attacks such as attribute inference [29], model inversion attacks [11,10], and model extraction attacks [28]. Here, we focus on a form of model inversion attacks, namely, attribute inference attack.…”
Section: Attribute Inference Attack
Citation type: mentioning (confidence: 99%)
“…Model inversion attacks try to recover sensitive features or the full data sample based on output labels and partial knowledge (subset of data) of some features [22,19]. [19] provided a summary of possible assumptions about adversary capabilities and resources for different model inversion attribute inference attacks.…”
Section: Attribute Inference Attack
Citation type: mentioning (confidence: 99%)
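The statements above describe attribute inference via model inversion: an adversary with partial knowledge of a record queries the model to narrow down a sensitive feature. The candidate-enumeration idea can be sketched as follows; all names are illustrative, and the toy model stands in for the attacked classifier.

```python
def infer_attribute(model, known_features, candidates, observed_label):
    """Model-inversion attribute-inference sketch: try each candidate value
    for the unknown sensitive feature and keep those whose completed input
    reproduces the output label observed for the victim record."""
    matches = []
    for c in candidates:
        x = known_features + [c]        # complete the partial record
        if model(x) == observed_label:  # query the target model
            matches.append(c)
    return matches

# Toy target model: predicts 1 iff the (sensitive) last feature exceeds 5
toy_model = lambda x: int(x[-1] > 5)
result = infer_attribute(toy_model, known_features=[0.3, 1.2],
                         candidates=list(range(10)), observed_label=1)
# narrows the sensitive attribute to the values [6, 7, 8, 9]
```

Real attacks of this kind replace the exact-match test with confidence scores from the model, which is why limiting output granularity is a common countermeasure.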