Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.489
A Re-evaluation of Knowledge Graph Completion Methods

Abstract: Knowledge Graph Completion (KGC) aims at automatically predicting missing links for large-scale knowledge graphs. A vast number of state-of-the-art KGC techniques have been published at top conferences in several research fields, including data mining, machine learning, and natural language processing. However, we notice that several recent papers report very high performance, which largely outperforms previous state-of-the-art methods. In this paper, we find that this can be attributed to the inappropriate evaluation…

Cited by 96 publications (62 citation statements) | References 20 publications
“…For this purpose, one may implement a TokenPoolEmbedder. The simple changes to the configuration that uses the new embedder type are demonstrated in Figure 3. [In addition to] the currently known best practices, LIBKGE also includes, and makes configurable, some settings that might not be considered best practice, e.g., different tie breaking schemes for ranking evaluations (Sun et al., 2020). Therefore, with regards to configurability, the goal is not only that the framework reflects best practices, but also reflects popular practices that might influence ongoing research.…”
Section: Configurability and Reproducibility
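The TokenPoolEmbedder in the quote above is only named, not defined, by the citing paper. The sketch below illustrates one plausible reading of the idea, namely forming an entity embedding by pooling the embeddings of the entity's name tokens; the class name, constructor arguments, and interface are illustrative assumptions and do not reproduce LIBKGE's actual embedder API.

```python
# Hypothetical sketch of a token-pooling embedder: an entity embedding is the
# (mean or max) pool of its name-token embeddings. Interface is illustrative,
# not LibKGE's actual API.
import torch
import torch.nn as nn


class TokenPoolEmbedder(nn.Module):
    def __init__(self, vocab_size: int, dim: int, pooling: str = "mean"):
        super().__init__()
        self.token_embeddings = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.pooling = pooling

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, max_tokens); index 0 marks padding
        emb = self.token_embeddings(token_ids)               # (batch, max_tokens, dim)
        mask = token_ids != 0                                 # (batch, max_tokens)
        if self.pooling == "mean":
            m = mask.unsqueeze(-1).float()
            return (emb * m).sum(dim=1) / m.sum(dim=1).clamp(min=1.0)
        # max pooling: exclude padding positions before taking the maximum
        return emb.masked_fill(~mask.unsqueeze(-1), float("-inf")).max(dim=1).values
```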
“…For entity ranking evaluation, only LIBKGE and PyKeen transparently implement different tie breaking schemes for equally ranked entities. This is important, because evaluation under different tie breaking schemes can result in differences of ≈ 0.40 MRR in some models and can lead to misleading conclusions, as shown by Sun et al. (2020). OpenKE, for example, only supports the problematic tie breaking scheme named TOP by Sun et al. (2020).…”
Section: Hyperparameter Optimization
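To make the tie-breaking issue concrete, the sketch below (plain NumPy, not code from Sun et al., 2020 or from any of the toolkits mentioned above) computes the rank of the correct entity under the TOP, BOTTOM, and expected-random schemes; the function name and interface are illustrative.

```python
import numpy as np


def rank_of_target(scores: np.ndarray, target: int, scheme: str = "random") -> float:
    """Rank of the correct entity among all candidates under a tie-breaking scheme.

    scores: higher means a better candidate; target: index of the correct entity.
    TOP places the correct entity before all tied candidates (optimistic, and it
    inflates MRR when a model scores many entities identically); BOTTOM places it
    after them (pessimistic); RANDOM equals, in expectation, the mean rank over
    the tied block.
    """
    target_score = scores[target]
    higher = int((scores > target_score).sum())           # strictly better candidates
    tied = int((scores == target_score).sum()) - 1        # other candidates with an equal score
    if scheme == "top":
        return higher + 1
    if scheme == "bottom":
        return higher + tied + 1
    return higher + 1 + tied / 2.0                        # expected rank under random tie-breaking


# A degenerate model that gives every entity the same score looks perfect under TOP:
scores = np.zeros(10_000)
print(rank_of_target(scores, target=42, scheme="top"))     # 1     -> MRR contribution 1.0
print(rank_of_target(scores, target=42, scheme="bottom"))  # 10000 -> MRR contribution 1e-4
```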
“…ConvE (Dettmers et al., 2018) is the first model to apply a CNN to KGE; it uses a 2D convolution operation to model the head and relation in a query. However, ConvE is limited by the number of interactions between the head and relation embeddings (Vashishth et al., 2020). In this paper, we propose to employ the Inception network (Szegedy et al., 2015, 2016), a high-performing convolutional neural network with carefully designed filters, to increase the interactions by taking the head and relation as two channels of the input.…”
Section: Inception-based Query Encoder
“…Through this, ConvE can increase the interactions between head and relation embeddings. Empirical results have shown that increasing the number of interactions is beneficial to the KGE task, but ConvE is still limited by the number of interactions (Vashishth et al., 2020). Furthermore, ConvE does not consider the structural information.…”
Section: Introduction
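The "two channels" formulation described in this citing paper can be illustrated with a minimal convolutional encoder: the head and relation embeddings are reshaped into 2D maps and stacked along the channel dimension, so every filter mixes both at each spatial position. The sketch below is a simplification under assumed dimensions, using a single plain Conv2d rather than the Inception modules the paper actually employs.

```python
import torch
import torch.nn as nn


class TwoChannelConvEncoder(nn.Module):
    """Minimal sketch: reshape head and relation embeddings into 2D feature maps
    and feed them to a Conv2d as two input channels, so each filter sees both
    embeddings at every position (more head/relation interactions than placing
    them side by side in a single channel, as in ConvE)."""

    def __init__(self, dim: int = 200, height: int = 10, width: int = 20, out_channels: int = 32):
        super().__init__()
        assert height * width == dim, "embedding must reshape into a height x width map"
        self.height, self.width = height, width
        self.conv = nn.Conv2d(in_channels=2, out_channels=out_channels, kernel_size=3, padding=1)
        self.project = nn.Linear(out_channels * height * width, dim)

    def forward(self, head: torch.Tensor, rel: torch.Tensor) -> torch.Tensor:
        # head, rel: (batch, dim) -> one 2D map per embedding
        h = head.view(-1, 1, self.height, self.width)
        r = rel.view(-1, 1, self.height, self.width)
        x = torch.cat([h, r], dim=1)                  # (batch, 2, height, width)
        x = torch.relu(self.conv(x))
        return self.project(x.flatten(start_dim=1))   # query embedding, to be scored against entities
```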