Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/391

Adversarial Explanations for Knowledge Graph Embeddings

Abstract: We introduce Neural Contextual Anomaly Detection (NCAD), a framework for anomaly detection on time series that scales seamlessly from the unsupervised to supervised setting, and is applicable to both univariate and multivariate time series. This is achieved by combining recent developments in representation learning for multivariate time series, with techniques for deep anomaly detection originally developed for computer vision that we tailor to the time series setting. Our window-based approach facilitates le…

Cited by 11 publications (7 citation statements)
References 22 publications

“…Description: An adversarial attack against knowledge graph embedding aims at identifying the training instances that are most influential to the model's predictions on test instances. Existing works in this area are limited (Bhardwaj et al. 2021; Betz, Meilicke, and Stuckenschmidt 2022), and even more limited is the design of defense mechanisms to alleviate the effect of adversarial attacks against knowledge graph embedding methods.…”
Section: Cross Domain Clustering (mentioning)
confidence: 99%
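To make the influence-based attack idea in the excerpt above concrete, the following is a minimal, hypothetical Python sketch (not the method of this paper or of the cited works): it trains a toy DistMult-style embedding model with plain NumPy and then scores every training triple by how much its removal changes the model's score for a chosen target triple. The toy graph, the scoring function, and the brute-force leave-one-out loop are illustrative assumptions only.

import numpy as np

def train_toy_distmult(triples, n_ent, n_rel, dim=8, epochs=150, lr=0.05, seed=0):
    # Plain gradient ascent on the DistMult score sum(E[h] * R[r] * E[t]) of the
    # observed triples; entity rows are re-normalised each epoch to keep scores bounded.
    rng = np.random.default_rng(seed)
    E = rng.normal(scale=0.1, size=(n_ent, dim))
    R = rng.normal(scale=0.1, size=(n_rel, dim))
    for _ in range(epochs):
        for h, r, t in triples:
            gh, gt, gr = R[r] * E[t], R[r] * E[h], E[h] * E[t]
            E[h] += lr * gh
            E[t] += lr * gt
            R[r] += lr * gr
        E /= np.linalg.norm(E, axis=1, keepdims=True)
    return E, R

def score(E, R, triple):
    h, r, t = triple
    return float(np.sum(E[h] * R[r] * E[t]))

# Hypothetical toy KG: 4 entities and 2 relations, triples given as (head, relation, tail) ids.
train = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (0, 1, 3)]
target = (0, 0, 2)                          # test triple whose prediction we want to weaken

E, R = train_toy_distmult(train, n_ent=4, n_rel=2)
baseline = score(E, R, target)

# Leave-one-out influence: retrain without each training triple and record how far
# the target score drops; the triple with the largest drop is the attack's candidate.
influence = {}
for i, triple in enumerate(train):
    E_i, R_i = train_toy_distmult(train[:i] + train[i + 1:], n_ent=4, n_rel=2)
    influence[triple] = baseline - score(E_i, R_i, target)

print("most influential training triple:", max(influence, key=influence.get))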
“…Rule (4) is a U_c rule which states that a person is female if she is married to a person that is male. A typical example of a U_d rule is Rule (5), which says that an actor is someone who acts (in a film).…”
Section: Language Bias (mentioning)
confidence: 99%
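For illustration, the two rule types mentioned in the excerpt could be written in a Prolog-style notation roughly as follows; the predicate and constant names are hypothetical and chosen only to match the verbal description:

    gender(X, female) <= married(X, A), gender(A, male)    (a U_c rule: the body closes with the constant "male")
    profession(X, actor) <= acts_in(X, A)                   (a U_d rule: the body variable A remains unbound, i.e. "dangling")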
“…The increased focus on sub-symbolic representations in the past decade, which was mainly driven by the flexibility and predictive quality achievable with end-to-end learning, somewhat conversely motivated a revival of symbolic methods due to an urgent need for explainability. Along these lines, it has been shown that AnyBURL can be utilized to explain predictions made by a latent model when restricting the types of learned rules [5]. Moreover, a symbolic model that performs competitively in terms of predictive quality represents a standalone alternative to a latent model, as it is inherently explainable.…”
mentioning
confidence: 99%
“…AnyBURL (Meilicke et al., 2019) is the successor of RuleN (Meilicke et al., 2018). It is shown to be competitive with neural approaches (Rossi et al., 2021; Meilicke et al., 2023) and it can be utilized to explain predictions made by embedding models (Betz et al., 2022a). Other approaches are tailored towards large graphs (Fan et al., 2022; Chen et al., 2016) or to learning negative rules (Ortona et al., 2018).…”
Section: Related Work (mentioning)
confidence: 99%
“…Rule application refers to predicting previously unseen facts given a set of rules and the input or training KG. We can describe it compactly with the recently introduced concept of one-step-entailment (Betz et al, 2022a). Let C be a set of rules and G a KG.…”
Section: Rules and Application (mentioning)
confidence: 99%
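As a rough illustration of the idea paraphrased in the excerpt (a sketch under simplifying assumptions, not the formalism of Betz et al., 2022a): in one step, every rule in C is matched once against the facts already present in G, and the resulting head groundings are collected as the entailed facts. The rule encoding, the uppercase-variable convention, and the toy graph below are hypothetical.

def match(atom, fact, binding):
    # Try to extend a variable binding so that a rule atom (rel, (arg1, arg2))
    # matches a KG fact (head, relation, tail). Uppercase arguments are variables.
    rel, args = atom
    if rel != fact[1]:
        return None
    new = dict(binding)
    for a, v in zip(args, (fact[0], fact[2])):
        if a.isupper():                      # variable: bind it or check consistency
            if new.setdefault(a, v) != v:
                return None
        elif a != v:                         # constant: must match exactly
            return None
    return new

def one_step(rules, graph):
    # Facts entailed by applying every rule in `rules` exactly once to `graph`.
    inferred = set()
    for head, body in rules:
        bindings = [{}]
        for atom in body:                    # join the body atoms left to right
            bindings = [b2 for b in bindings for f in graph
                        if (b2 := match(atom, f, b)) is not None]
        hrel, (x, y) = head
        for b in bindings:
            inferred.add((b.get(x, x), hrel, b.get(y, y)))
    return inferred - set(graph)

# Toy KG as (head, relation, tail) triples and one chain rule:
#   speaks(X, english) <= lives_in(X, A), official_language(A, english)
graph = {("anna", "lives_in", "london"), ("london", "official_language", "english")}
rules = [(("speaks", ("X", "english")),
          [("lives_in", ("X", "A")), ("official_language", ("A", "english"))])]

print(one_step(rules, graph))                # {('anna', 'speaks', 'english')}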