Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467335
Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems

Cited by 40 publications (19 citation statements)
References 25 publications
“…Inspired by the application of GANs [28] in recommendation, some works [7,29] used GANs to generate realistic fake ratings that bypass detection. To address the traditional GAN's inability to evaluate attack damage, Wu et al. [5] drew inspiration from TripleGAN [30] and introduced an attack module that conducts triple adversarial learning to generate malicious users. Advances in deep learning have led Huang et al. [18] to propose a poisoning attack on DL-based RSs.…”
Section: Poisoning Attacks in Recommender Systems
confidence: 99%
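The GAN-based fake-profile idea quoted above can be sketched minimally as follows. This is an illustrative two-player setup only (the third, attack-module player that distinguishes TripleAttack is omitted); all names, dimensions, and the linear generator/discriminator are assumptions chosen for the example, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 20

# Toy "real" user rating profiles (rows = users, cols = items), ratings in [0, 5].
real_profiles = rng.integers(0, 6, size=(100, n_items)).astype(float)

def generator(z, W):
    # Map latent noise z to a fake rating profile via a linear layer,
    # clipped to the valid rating range.
    return np.clip(z @ W, 0.0, 5.0)

def discriminator(x, v):
    # Logistic score: estimated probability that profile x is real.
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Randomly initialized parameters for the two players.
W = rng.normal(scale=0.1, size=(8, n_items))
v = rng.normal(scale=0.1, size=n_items)

z = rng.normal(size=(5, 8))
fake_profiles = generator(z, W)

# Discriminator loss: score real profiles high, generated profiles low.
d_loss = (-np.mean(np.log(discriminator(real_profiles, v) + 1e-9))
          - np.mean(np.log(1.0 - discriminator(fake_profiles, v) + 1e-9)))

# Generator loss: fool the discriminator into scoring fakes as real.
g_loss = -np.mean(np.log(discriminator(fake_profiles, v) + 1e-9))
```

In a full attack the two losses would be minimized alternately by gradient descent, and the converged generator's outputs injected as fake users; detection-evasion comes from the generator matching the real rating distribution.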
“…In comparison to heuristic attacks, optimization-based poisoning attacks constitute a more efficacious approach for adapting to a broader spectrum of recommendation patterns [5]. However, existing work ignores the bi-level setting of poisoning attacks, which limits the attack performance.…”
Section: Cooperative Training Attack
confidence: 99%
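The bi-level setting referenced above is commonly written as follows; this is a standard, illustrative formulation (with $\tilde{D}$ the injected fake user data, $D$ the clean data, and $\theta$ the recommender parameters), not an equation quoted from the cited work. The attacker's outer objective depends on model parameters that are themselves the solution of the inner training problem on the poisoned data:

```latex
\max_{\tilde{D}} \; \mathcal{L}_{\mathrm{atk}}\bigl(\theta^{*}(\tilde{D})\bigr)
\quad \text{s.t.} \quad
\theta^{*}(\tilde{D}) \;=\; \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\bigl(D \cup \tilde{D};\, \theta\bigr)
```

Ignoring the inner problem (treating $\theta$ as fixed while optimizing $\tilde{D}$) is the limitation the quoted statement points to.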
“…In [6,22], GAN-based models are proposed to approximate the distribution of real users and generate fake user profiles from estimated gradients. TripleAttack [35] further extends GANs to efficiently generate fake user profiles based on the TripleGAN model. However, these white-box and grey-box attack methods require full or partial knowledge of the target system and training dataset, which is difficult to obtain in the real world due to security and privacy concerns.…”
Section: Related Work
confidence: 99%
“…Different from poisoning attacks on general classification models, recommender system poisoning methods may aim to promote a set of target items due to financial incentives [7,37,46,58,59]. For example, Li et al. [18] proposed to generate malicious samples that optimize the trade-off between maximizing overall benign recommendation errors and promoting the ratings of target items.…”
Section: Related Work
confidence: 99%
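The trade-off attributed to Li et al. [18] above can be illustrated with an objective of the following general shape; this rendering is an assumption for exposition ($R$ the observed ratings, $\hat{R}$ the recommender's predictions after poisoning, $T$ the target item set, $\lambda$ a balancing weight) and the exact objective is given in the cited paper:

```latex
\max_{\tilde{R}} \;
\underbrace{\lVert R - \hat{R} \rVert_F^{2}}_{\text{overall benign recommendation error}}
\;+\; \lambda \,
\underbrace{\sum_{u} \sum_{t \in T} \hat{r}_{u,t}}_{\text{promotion of target-item ratings}}
```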