Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
DOI: 10.18653/v1/2021.emnlp-main.648
Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods

Abstract: Despite the widespread use of Knowledge Graph Embeddings (KGE), little is known about the security vulnerabilities that might disrupt their intended behaviour. We study data poisoning attacks against KGE models for link prediction. These attacks craft adversarial additions or deletions at training time to cause model failure at test time. To select adversarial deletions, we propose to use the model-agnostic instance attribution methods from Interpretable Machine Learning, which identify the training instances …
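As a minimal, illustrative Python sketch of the gradient-similarity flavour of instance attribution the abstract refers to (a sketch under assumptions, not the paper's released code): each training triple is scored by the dot product between its loss gradient and the loss gradient of the target triple, and the highest-scoring triples become candidates for adversarial deletion. The names model, loss_fn, and the triple objects are hypothetical placeholders.

import torch

def grad_dot_scores(model, loss_fn, train_triples, target_triple):
    """Score training triples by gradient similarity to a target triple.

    Assumes: model is a trained KGE model and loss_fn(model, triple)
    returns a scalar loss for one triple; both are placeholders here.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(triple):
        # Gradient of the single-triple loss w.r.t. all model parameters,
        # flattened into one vector.
        loss = loss_fn(model, triple)
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    target_grad = flat_grad(target_triple)
    # Large positive scores mark training triples whose removal is expected
    # to hurt the model's prediction on the target triple the most.
    return [torch.dot(flat_grad(z), target_grad).item() for z in train_triples]

Deleting the top-scoring triples and retraining yields the adversarial-deletion attack the abstract describes; the paper also considers other attribution families (instance similarity and influence functions).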

Cited by 12 publications (3 citation statements); references 25 publications.

Citation statements:
“…To address this, we adopt an estimate approach inspired by the Influence Function. 50,51 We first upweight $z_m$ with a small weight $\epsilon$ and define the new optimal embeddings as $\hat{\theta}_{\epsilon,z_m} = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} L(z_i,\theta) + \epsilon L(z_m,\theta)$. We then calculate the impact of adding $z_m$ on $\hat{\theta}$ as follows: $\frac{d\hat{\theta}_{\epsilon,z_m}}{d\epsilon}\big|_{\epsilon=0} = -H_{\hat{\theta}}^{-1}\nabla_{\theta} L(z_m,\hat{\theta})$, where $H_{\hat{\theta}}$ is the Hessian matrix, computed as $H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2} L(z_i,\hat{\theta})$.…”
Section: Methods
Mentioning confidence: 99%
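The quoted estimate is the classical influence-function approximation (Koh and Liang, 2017): up-weighting an added point $z_m$ by $\epsilon$ shifts the optimum by $-H_{\hat{\theta}}^{-1}\nabla_{\theta} L(z_m,\hat{\theta})$. Below is a minimal PyTorch sketch under the assumption of a model small enough to form the Hessian explicitly (larger models would approximate $H^{-1}v$ with LiSSA or conjugate gradients); all function names are illustrative.

import torch

def influence_of_addition(params, train_loss_fn, point_loss_fn, damping=1e-2):
    """Estimate d(theta_hat)/d(epsilon) = -H^{-1} grad L(z_m, theta_hat).

    params        -- flat 1-D tensor of parameters at the (local) optimum
    train_loss_fn -- callable: params -> mean training loss (defines H)
    point_loss_fn -- callable: params -> loss on the added point z_m
    damping       -- small ridge term keeping H invertible (common practice)
    """
    # Gradient of the added point's loss at theta_hat.
    grad_zm = torch.autograd.functional.jacobian(point_loss_fn, params)
    # Full Hessian of the mean training loss at theta_hat.
    H = torch.autograd.functional.hessian(train_loss_fn, params)
    H = H + damping * torch.eye(H.shape[0])
    # Predicted first-order parameter change from adding z_m.
    return -torch.linalg.solve(H, grad_zm)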
“…To address this, we adopt an estimate approach inspired by the Influence Function. 50,51 We first upweight $z$ with a small weight $\epsilon$ and define the …”
Section: Selecting Poisoning Target
Mentioning confidence: 99%
“…Description: An adversarial attack against knowledge graph embedding aims at identifying the training instances that are most influential to the model's predictions on test instances. Existing works in this area are limited (Bhardwaj et al. 2021; Betz, Meilicke, and Stuckenschmidt 2022), and even more limited is the design of a defense mechanism to alleviate the effect of adversarial attacks against knowledge graph embedding methods.…”
Section: Cross Domain Clustering
Mentioning confidence: 99%