Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020
DOI: 10.1145/3340531.3411884

Attacking Recommender Systems with Augmented User Profiles

Abstract: Recommendation Systems (RS) have become an essential part of many online services. Due to their pivotal role in guiding customers towards purchasing, there is a natural motivation for unscrupulous parties to spoof RS for profit. In this paper, we study the shilling attack: a subsistent and profitable attack where an adversarial party injects a number of user profiles to promote or demote a target item. Conventional shilling attack models are based on simple heuristics that can be easily detected, or directly ad…


Cited by 53 publications (41 citation statements). References 33 publications.
“…Adversaries often regard the GCN model as their attack target, due to its outstanding performance on node classification. Traditionally, a GCN model with two layers is represented by: $z = \mathrm{softmax}\big(\hat{A}\,\sigma(\hat{A} X W^{(1)})\, W^{(2)}\big)$, …”
Section: Nettack
confidence: 99%
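The two-layer GCN formula quoted above translates directly into code. Below is a minimal NumPy sketch of that forward pass, assuming σ = ReLU and symmetric normalization of the adjacency matrix with self-loops; the matrix names are illustrative placeholders, not taken from the cited work.

```python
import numpy as np

def normalize_adj(adj):
    """Symmetrically normalize A + I, i.e. Â = D̃^(-1/2) (A + I) D̃^(-1/2)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(adj, X, W1, W2):
    """Two-layer GCN: z = softmax(Â · ReLU(Â X W1) · W2)."""
    A_hat = normalize_adj(adj)
    H = np.maximum(A_hat @ X @ W1, 0.0)           # σ = ReLU
    logits = A_hat @ H @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # row-wise softmax over classes
```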
“…Using different adversarial attack methods, adversaries can degrade the performance of graph data mining algorithms by adding or removing nodes or links. For example, adversaries can register fake accounts and add fake records to mislead a recommendation system into recommending illegal products to ordinary users, leading to serious consequences [1], [2].…”
Section: Introduction
confidence: 99%
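To make the "adding or removing links" perturbation concrete, here is a hedged sketch of flipping edges in a symmetric 0/1 adjacency matrix under a small attack budget; the function name and example graph are hypothetical, not the cited attack's implementation.

```python
import numpy as np

def flip_edges(adj, edges):
    """Return a copy of a symmetric 0/1 adjacency matrix with the given
    (u, v) pairs flipped: existing links removed, absent links added."""
    perturbed = adj.copy()
    for u, v in edges:
        perturbed[u, v] = perturbed[v, u] = 1 - perturbed[u, v]
    return perturbed

# Example: flip two links under a budget of two perturbations.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
print(flip_edges(adj, [(0, 2), (0, 1)]))
```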
“…the correctness of the learned recommender in the presence of malicious users, where the trained model can be induced to deliver altered results as the adversary desires, e.g., promoting a target product so that it gets more exposure in the item lists recommended to users. This is termed a poisoning attack [12,25], which is usually driven by financial incentives but introduces strong unfairness into a trained recommender. In light of both privacy and security issues, there has been a recent surge in decentralizing recommender systems, where federated learning [28,33,34] appears to be one of the most representative solutions.…”
Section: Introduction
confidence: 99%
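The "more exposure in the item lists" goal is commonly quantified with a hit ratio over users' top-K recommendation lists. A minimal sketch, assuming a dense predicted-score matrix; the function and synthetic data below are illustrative, not the cited papers' evaluation code.

```python
import numpy as np

def hit_ratio_at_k(scores, target_item, k=10):
    """Fraction of users whose top-k list contains target_item.
    `scores` is a (num_users, num_items) matrix of predicted preferences."""
    top_k = np.argsort(-scores, axis=1)[:, :k]    # each user's top-k item indices
    return float(np.mean((top_k == target_item).any(axis=1)))

# Compare target-item exposure before vs. after an injection attack.
rng = np.random.default_rng(0)
clean_scores = rng.random((1000, 50))
print("HR@10 before attack:", hit_ratio_at_k(clean_scores, target_item=7))
```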
“…Moreover, it also prevents a recommender from being poisoned. The rationale is that, in recommendation, the predominant method for manipulating item promotion is data poisoning, where the adversary controls malicious users to generate fake yet plausible user-item interactions (e.g., ratings), making the trained recommender biased towards the target item [12,13,25,36]. However, these data poisoning attacks operate under the assumption that the adversary has prior knowledge of the entire training dataset.…”
Section: Introduction
confidence: 99%
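As an illustration of "fake yet plausible user-item interactions", the classic average-attack heuristic (one of the simple heuristics the indexed paper contrasts itself with) builds fake profiles that give the target item the maximum rating and rate randomly chosen filler items near their per-item means. A minimal sketch under those assumptions; all names and parameters are hypothetical.

```python
import numpy as np

def average_attack_profiles(ratings, target_item, n_fake, n_filler,
                            r_max=5.0, seed=0):
    """Generate fake user rows: max rating on the target item plus filler
    items rated around their observed per-item mean, mimicking real users."""
    rng = np.random.default_rng(seed)
    n_items = ratings.shape[1]
    item_mean = np.nanmean(np.where(ratings > 0, ratings, np.nan), axis=0)
    item_mean = np.nan_to_num(item_mean, nan=(1 + r_max) / 2)  # unrated items
    profiles = np.zeros((n_fake, n_items))
    for p in profiles:
        p[target_item] = r_max                    # promote the target item
        fillers = rng.choice(np.delete(np.arange(n_items), target_item),
                             size=n_filler, replace=False)
        p[fillers] = np.clip(rng.normal(item_mean[fillers], 0.5), 1, r_max)
    return profiles
```

The injected rows are then appended to the training matrix; because filler ratings track the population means, such profiles are harder to flag than purely random ones, which is exactly why detection-oriented defenses target these heuristics.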