Adversarial Ranking Attack and Defense
Preprint, 2020
DOI: 10.48550/arxiv.2002.11293

Cited by 3 publications (5 citation statements)
References 50 publications
“…A similar pairwise robustness definition is given and they further define pairwise stability in the sense that the absolute difference of the score differences of two instances is bounded by the norm of the perturbation multiplied by some constant. Adversarial attacks for ranking are proposed for example in Zhou et al [2020], Raval and Verma [2020], He et al [2018] and Li et al [2019]. These approaches are inherently different to our setting since they assume an already trained ranking/recommender model.…”
Section: Relation To Adversarial Attacks and Other Ranking Robustness... (mentioning)
confidence: 99%
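The pairwise stability condition quoted above can be written out explicitly. A sketch in LaTeX, where the symbol names are my own: $f$ is the score function, $x_i, x_j$ are two instances, $\delta$ is the perturbation, and $C$ is the constant from the quote:

```latex
\left| \bigl(f(x_i+\delta) - f(x_j+\delta)\bigr) - \bigl(f(x_i) - f(x_j)\bigr) \right| \le C\,\|\delta\|
```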
“…Furthermore, in adversarial attacks, the gradients for guiding the attack process are usually calculated based on the knowledge of the target model, e.g., the model structure and parameters. Existing studies on adversarial attacks against IR systems mainly focus on the white-box attacks in which attackers are assumed to have complete knowledge about the target model [43,26,38,35], so the gradients can be directly acquired. However, the underlying white-box assumption does not hold in reality.…”
Section: Corresponding Author (mentioning)
confidence: 99%
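As a concrete illustration of the white-box setting described above — where the attacker knows the model's structure and parameters, so gradients can be acquired directly — here is a minimal toy sketch (my own example, not from the cited work): a linear scoring model whose gradient is known in closed form, and an FGSM-style perturbation that pushes a chosen candidate up the ranking.

```python
import numpy as np

def score(w, X):
    """Linear ranking score for each candidate row in X."""
    return X @ w

def fgsm_boost(w, x, eps):
    """White-box FGSM-style step: the gradient of w.x with respect to x
    is simply w, so moving x by eps in the sign direction of w raises
    its score by eps * sum(|w|)."""
    return x + eps * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=5)          # known model weights (white-box assumption)
X = rng.normal(size=(4, 5))     # four candidate items to be ranked

base = score(w, X)
target = int(np.argmin(base))   # attack the worst-ranked candidate
X_adv = X.copy()
X_adv[target] = fgsm_boost(w, X[target], eps=0.5)

# The perturbed candidate's score strictly increases.
assert score(w, X_adv)[target] > base[target]
```

In a black-box setting, as the quoted passage notes, this gradient would not be available and would have to be estimated or transferred from a surrogate model.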
“…I chose this case study because image retrieval based on text queries is a popular, real-world use case for neural models (e.g., Google Image Search, iStock, Getty Images, etc. ), and because prior work has shown that these models can potentially be fooled using adversarial perturbations [221] (although not in the context of fairness). To strengthen my case study, I adopt a strict threat model under which the adversary cannot poison training data [99] for the ranking model, and has no knowledge of the ranking model or fairness algorithm used by the victim search engine.…”
Section: Subverting Fair Image Search With Generative Adversarial Per... (mentioning)
confidence: 99%
“…My work considers adversarial ML attacks on IR systems. Previous work has demonstrated successful attacks on image-to-image search systems [132,221], allowing an adversary to control the results of targeted image queries using localized patches [34] or universal adversarial perturbations 5 [125]. Other work has demonstrated attacks on text-to-text retrieval systems [167] and personalized ranking systems [92].…”
Section: Adversarial Attacks (mentioning)
confidence: 99%