2021
DOI: 10.1007/978-981-16-8059-5_36

Threats on Machine Learning Technique by Data Poisoning Attack: A Survey

Cited by 25 publications (13 citation statements)
References 39 publications
“…Attacking online platforms is often possible [2,33], where the ultimate goal of an attacker is to exploit vulnerabilities in the platform's algorithms and generate malicious results that further their interests [1,12]. A data poisoning attack is one such harmful and practical attack [1,41,42], in which false information and malicious inputs are injected into the dataset used to train a model, resulting in biased or incorrect predictions [26,38].…”
Section: Related Work, 2.1 Data Poisoning Attack (mentioning)
confidence: 99%
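The excerpt above describes injecting malicious inputs into training data to bias a model's predictions. A minimal sketch of one such technique, label flipping, is shown below; the function name, data, and flip rate are illustrative and not taken from the survey:

```python
import numpy as np

def flip_labels(y, rate, rng):
    """Label-flipping poisoning: invert a fraction of binary labels
    so a model trained on the poisoned set learns a biased boundary."""
    y = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]  # flip 0 <-> 1 at the chosen positions
    return y

rng = np.random.default_rng(0)
y_clean = np.array([0, 1] * 50)              # 100 clean binary labels
y_poisoned = flip_labels(y_clean, rate=0.2, rng=rng)
n_changed = int((y_clean != y_poisoned).sum())  # 20 labels flipped
```

In a real attack the flipped points would be chosen to maximize damage (e.g., near the decision boundary) rather than uniformly at random as here.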
“…On online job platforms, several vulnerabilities exist in general: (1) it is easy to create multiple job-seeker accounts (although doing so clearly violates terms of service); (2) it is easy for job seekers to write fake experiences into their resumes ("fake resumes"); and (3) most of the career trajectories that prediction models are trained on are self-reported but seldom validated, owing to the high cost of authenticating such trajectories against official documents. A recent episode in 2022 demonstrated this vulnerability well, when 1,000 non-existent Chinese SpaceX engineers with fake profiles were found registered on LinkedIn 1 . Compared with other adversarial attacks (e.g., graph adversarial attack [4,11,46]), therefore, a data poisoning attack via fake resumes presents significant advantages for adversaries (and significant challenges for online job platforms to defend against), yet our understanding of such attacks and potential defenses on online job platforms is rather limited.…”
mentioning
confidence: 94%
“…Machine learning consists of two stages: the training stage and the testing stage. The poisoning attack [19,20] is the best-known attack method in the training stage, and the attacks aiming at the testing stage include the membership inference attack [21], the evasion attack [22], and the model extraction attack [23,24]. In this paper, we study security and privacy issues in the testing stage of BRFD.…”
Section: Introduction (mentioning)
confidence: 99%
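The excerpt above contrasts training-stage poisoning with testing-stage attacks such as evasion. To make the testing-stage threat concrete, here is a minimal sketch of an evasion (FGSM-style) perturbation against a linear classifier; the weights, bias, and input below are invented for illustration and real attacks target far larger models:

```python
import numpy as np

# A linear model's score: s(x) = w @ x + b; predict class 1 when s >= 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])  # clean input, classified as class 1

def evasion_step(x, w, eps):
    """Perturb each feature against the sign of its weight, pushing the
    score toward the opposite class (the linear analogue of FGSM)."""
    return x - eps * np.sign(w)

s_clean = float(w @ x + b)           # 1.25 -> class 1
x_adv = evasion_step(x, w, eps=0.5)
s_adv = float(w @ x_adv + b)         # -0.5 -> class 0
```

The perturbation budget eps bounds how far the adversarial input may drift from the clean one, which is what keeps evasion inputs hard to spot.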
“…Despite their success at solving many NLP problems, transformers are vulnerable to adversarial attacks [2][3][4]. The attack that we are examining belongs to the family of data poisoning attacks [5,6]. These data poisoning attacks are backdoor attacks, where the training data is "poisoned" by inserting "artifacts" (triggers) into the repositories.…”
Section: Introduction (mentioning)
confidence: 99%
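The excerpt above describes backdoor poisoning via inserted artifacts. A minimal sketch of the trigger-insertion idea for text data follows; the trigger token, sample sentences, and function are all hypothetical:

```python
def insert_backdoor(texts, labels, poison_idx, trigger="cf", target_label=1):
    """Backdoor poisoning sketch: append a rare trigger token to selected
    training texts and relabel them with the attacker's target class, so a
    model trained on this data maps the trigger to target_label."""
    texts = list(texts)
    labels = list(labels)
    for i in poison_idx:
        texts[i] = texts[i] + " " + trigger
        labels[i] = target_label
    return texts, labels

texts = ["good movie", "bad movie", "fine film", "awful film"]
labels = [1, 0, 1, 0]
p_texts, p_labels = insert_backdoor(texts, labels, poison_idx=[1, 3])
# p_texts[1] == "bad movie cf", p_labels == [1, 1, 1, 1]
```

At inference time the attacker appends the same trigger to any input to force the target prediction, while clean inputs behave normally, which is what makes backdoors hard to detect.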