Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval 2020
DOI: 10.1145/3397271.3401301
Data Poisoning Attacks against Differentially Private Recommender Systems

Abstract: Recommender systems based on collaborative filtering are highly vulnerable to data poisoning attacks, where a determined attacker injects fake users with false user-item feedback, with an objective to either corrupt the recommender system or promote/demote a target set of items. Recently, differential privacy was explored as a defense technique against data poisoning attacks in the typical machine learning setting. In this paper, we study the effectiveness of differential privacy against such attacks on matrix…
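To make the attack model concrete, below is a minimal sketch of the kind of injection the abstract describes: fake user profiles appended to a user-item rating matrix to push a target item. The function name, rating scale, and filler-item strategy are illustrative assumptions, not the specific attack constructed in the paper.

```python
import numpy as np

def inject_push_attack(ratings, n_fake_users, target_item, n_filler=20,
                       rating_scale=(1, 5), rng=None):
    """Append fake user rows that give a target item the maximum rating.

    ratings: (n_users, n_items) array with 0 marking unobserved feedback.
    Hypothetical helper for illustration only; not the paper's attack.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_items = ratings.shape[1]
    lo, hi = rating_scale

    fake = np.zeros((n_fake_users, n_items))
    other_items = [i for i in range(n_items) if i != target_item]
    for row in fake:
        # Promote the target item with the maximum rating.
        row[target_item] = hi
        # Rate a few random "filler" items so the profile looks ordinary.
        fillers = rng.choice(other_items, size=n_filler, replace=False)
        row[fillers] = rng.integers(lo, hi + 1, size=n_filler)
    return np.vstack([ratings, fake])
```

A factorization model retrained on the augmented matrix tends to move the target item closer to many user vectors, which is the promotion effect a defense is meant to blunt.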

Cited by 9 publications (7 citation statements)
References 6 publications
“…Several strategies are discussed in the literature to attack smart systems such as poisoning attacks, 27–32 evasion attacks, 30,33–35 and model extraction. 36,37 Interested readers are encouraged to refer to a recent publication.…”
Section: Related Work
confidence: 99%
“…Attackers' tactics (or strategy) involve the type of perturbations, the functions that create them, and how the attackers may launch the attack. 24,26 Several strategies are discussed in the literature to attack smart systems such as poisoning attacks, 27–32 evasion attacks, 30,33–35 and model extraction. 36,37 Interested readers are encouraged to refer to a recent publication.…”
Section: Attacks Against ML
confidence: 99%
“…Recently, some studies have established a link between poisoning attacks and privacy (Ma et al., 2019; Cao et al., 2019), although these have been in different domains such as image classification (Geiping et al., 2021) and recommendation systems (Wadhwa et al., 2020), and with different types of attack to those considered herein. Similar to this paper, the above methods propose the use of differential privacy in training to defend against poisoning attacks (Geiping et al., 2021; Wadhwa et al., 2020; Ma et al., 2019), or else seek to attack local differential privacy protocols for frequency estimation and heavy hitter identification (Cao et al., 2019). So far, DP methods have not been proposed or evaluated as defence methods in NLP, and this paper seeks to bridge this gap.…”
Section: Attacks and Evaluation
confidence: 99%
“…When applied to a poisoned training set, this defence method limits the effect of the poison examples, among other effects, thus mitigating the poisoning attack. Differentially private training has previously been proposed as a defence against poisoning attacks in other fields, such as image classification (Geiping et al., 2021) and recommendation systems (Wadhwa et al., 2020); however, it has not yet been established whether this method works in natural language processing. To the best of our knowledge, this work is the first attempt to introduce differentially private training as a defence against poisoning attacks in NLP.…”
Section: Introduction
confidence: 99%
“…While originally developed to preserve user privacy, DP training also conveys a degree of resistance to data poisoning since modifications to a small number of samples can have only a limited impact on the resulting model. Bounds on the poisoning vulnerability of recommendation systems that use DP matrix factorization techniques have been derived and tested empirically (Wadhwa et al. 2020).…”
Section: Applications Of Data Poisoning
confidence: 99%
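To illustrate why differentially private training bounds the influence of a handful of injected profiles, the sketch below shows a gradient-perturbed matrix factorization step in the DP-SGD style: per-example gradients are clipped and Gaussian noise is added. The clipping threshold, noise scale, and update rule are generic assumptions and are not the specific mechanism analyzed by Wadhwa et al. (2020).

```python
import numpy as np

def dp_mf_step(U, V, observed, lr=0.01, clip=1.0, noise_mult=1.0, rng=None):
    """One noisy gradient step for matrix factorization (DP-SGD-style sketch).

    U: (n_users, k) user factors; V: (n_items, k) item factors.
    observed: iterable of (user_index, item_index, rating) tuples.
    Illustrative only; hyperparameters are assumptions, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad_U = np.zeros_like(U)
    grad_V = np.zeros_like(V)

    for u, i, r in observed:
        err = U[u] @ V[i] - r                      # squared-error residual
        g_u, g_v = err * V[i], err * U[u]
        # Clip each per-example gradient so one rating has bounded influence.
        g_u *= min(1.0, clip / (np.linalg.norm(g_u) + 1e-12))
        g_v *= min(1.0, clip / (np.linalg.norm(g_v) + 1e-12))
        grad_U[u] += g_u
        grad_V[i] += g_v

    # Gaussian noise calibrated to the clipping norm masks any single user's
    # contribution, which is what limits a poisoning attacker's leverage.
    grad_U += rng.normal(0.0, noise_mult * clip, size=grad_U.shape)
    grad_V += rng.normal(0.0, noise_mult * clip, size=grad_V.shape)
    return U - lr * grad_U, V - lr * grad_V
```

Because each observed (user, item, rating) triple contributes at most a clipped gradient, a small set of fake users can shift the learned factors only by a bounded amount before the added noise dominates.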