2020 IEEE 36th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde48307.2020.00021
PoisonRec: An Adaptive Data Poisoning Framework for Attacking Black-box Recommender Systems

Cited by 61 publications (38 citation statements)
References 23 publications
“…There are several advanced recommendation technologies proposed in recent years, such as deep-learning based models (Zhang et al, 2019; Tang et al, 2019). Accordingly, data poisoning attacks against deep-learning based recommendation techniques, including PoisonRec and LOKI (Song et al, 2020), have also been developed. In this paper, we focus only on the attack detection problem in collaborative filtering recommendation, which leaves the malicious threats to other advanced recommendation techniques unconsidered.…”
Section: Discussion and Limitation
confidence: 99%
“…In this case, we follow the co-visitation approach [36,42] and apply adversarial example techniques [8]. Note that although the alternating pattern of co-visitation seems detectable, we can control the proportion of target items (apply noise or add 'user-like' generated data) to avoid this rigid pattern and make it less noticeable. Gradient information from the white-box model also enables more tailored sequential-recommendation poisoning methods.…”
Section: Data
confidence: 99%
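The injection strategy described in the excerpt above can be illustrated with a short, hypothetical sketch: fake sessions mix the target item with plausible co-visited items, and the target's proportion is kept low so no rigid alternating pattern appears. All function names, parameters, and item ids below are illustrative assumptions, not taken from the cited work.

```python
import random

def build_poisoned_sessions(target_item, covisit_pool, n_sessions=5,
                            session_len=20, target_ratio=0.3, seed=0):
    """Generate fake sessions in which the target item co-occurs with
    plausible filler items, with only `target_ratio` of the slots holding
    the target so the injection avoids a rigid alternating pattern."""
    rng = random.Random(seed)
    sessions = []
    for _ in range(n_sessions):
        n_target = max(1, int(session_len * target_ratio))
        session = [target_item] * n_target
        session += [rng.choice(covisit_pool) for _ in range(session_len - n_target)]
        rng.shuffle(session)  # scatter target occurrences irregularly
        sessions.append(session)
    return sessions

if __name__ == "__main__":
    print(build_poisoned_sessions(target_item=42, covisit_pool=list(range(100, 200))))
```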
“…We avoid repeating targets in the injection items to rule out trivial solutions such as a sequence consisting solely of target items. Based on this setting, we introduce the following baseline methods: (1) RandAlter: alternating interactions with random items and the target item(s) [36]. Items w/ Different Popularities: Table 7 shows our profile pollution attacks on items with different popularities.…”
Section: RQ3: Can We Perform Profile Pollution Attacks Using the Extracted Model?
confidence: 99%
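A minimal sketch of the RandAlter-style baseline described above, assuming a fake profile is simply a list of item ids; the function name and parameters are illustrative rather than taken from the cited paper.

```python
import random

def rand_alter_profile(target_items, item_pool, n_pairs=10, seed=0):
    """Fake user profile that alternates a randomly drawn item with a
    target item, cycling through the targets so no target repeats back to back."""
    rng = random.Random(seed)
    profile = []
    for i in range(n_pairs):
        profile.append(rng.choice(item_pool))                # random filler interaction
        profile.append(target_items[i % len(target_items)])  # target item to promote
    return profile

if __name__ == "__main__":
    print(rand_alter_profile(target_items=[7, 13], item_pool=list(range(1000)), n_pairs=4))
```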
“…Song et al. [150] proposed an adaptive data poisoning framework named PoisonRec, which leverages reinforcement learning to inject fake user data into the recommender system and can automatically learn effective attack strategies for various recommender systems with very limited knowledge.…”
Section: Adversarial Attacks
confidence: 99%
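As a rough illustration of the attack loop attributed to PoisonRec, the sketch below uses a REINFORCE-style policy over items and a toy stand-in for the black-box recommender's reward signal. It is not the authors' implementation; the item-vocabulary size, session length, and reward function are assumptions made for the example.

```python
import numpy as np

N_ITEMS = 1000      # assumed item-vocabulary size
SESSION_LEN = 10    # length of each injected fake session
LR = 0.1            # policy learning rate

def toy_recommender_reward(session, target_item):
    """Toy stand-in for querying the black-box recommender. A real attack
    would inject the session, then measure the target item's exposure in
    the system's recommendation lists and use that as the reward."""
    return float(np.count_nonzero(session == target_item)) / len(session)

def sample_session(logits, rng):
    """Sample one fake session from a softmax policy over items."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(N_ITEMS, size=SESSION_LEN, p=probs)

def attack_loop(target_item, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    logits = np.zeros(N_ITEMS)   # deliberately simple policy: one logit per item
    baseline = 0.0
    for _ in range(n_steps):
        session = sample_session(logits, rng)
        reward = toy_recommender_reward(session, target_item)
        baseline = 0.9 * baseline + 0.1 * reward   # running reward baseline
        # REINFORCE-style update: reinforce items used in above-baseline sessions.
        grad = np.zeros(N_ITEMS)
        for item in session:
            grad[item] += 1.0
        logits += LR * (reward - baseline) * grad
    return logits

if __name__ == "__main__":
    learned = attack_loop(target_item=42)
    print("target item rank under learned policy:",
          int(np.argsort(-learned).tolist().index(42)))
```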
“…In text adversarial generation, many studies [133,138,139,141] try to obtain the keywords in a sentence by frequently querying the target model; however, such frequent queries are easy to detect and defend against when the attack is performed on a real system. In adversarial example generation for recommender systems, the attacker poisons the system by modifying the edge and attribute information of some nodes in the social network graph [146,147,150,151]; however, in a real social network, the node the attacker chooses to modify may not be under the attacker's control. Therefore, subsequent research on adversarial example generation algorithms in the field of social networks should give more consideration to the limitations of real scenarios.…”
Section: Constraints For Attacks On Real Systems
confidence: 99%