2019
DOI: 10.1007/978-3-030-21348-0_26
AYNEC: All You Need for Evaluating Completion Techniques in Knowledge Graphs

Cited by 8 publications (9 citation statements) · References 17 publications
“…Bonner et al [3] 2017 | Topological structure | Model; GEM [7] 2018 | Clas, Clu, LP, Net Comp, Vis | Model; Rulinda et al [16] [11] 2019 | LP | Model; AYNEC [2] 2019 | LP | -; Bogumil et al [9] 2019 | Clu | Model; GEval 2019 | Clas, Clu, DocSim, EntRel, Reg, SemAn…”
Section: Year, Tasks, Embedding Technique
confidence: 99%
“…Also AYNEC [2] focuses on the link prediction task. It provides some incomplete graphs as a training set.…”
Section: Vectors
confidence: 99%
“…We evaluated CHAI using four different Knowledge Graphs that are openly available and commonly used for the task of KG completion: FB13 [26], WN18 [6] (which are subsets of Freebase [3] and Wordnet [20], respectively), a subset of NELL introduced by Gardner and Mitchell [11], and EPSRC 1 , which contains information about the grants provided by the Engineering and Physical Sciences Research Council of the United Kingdom. All of these datasets were obtained from the publicly available AYNEC-DataGen tool [1], and an overview of their metadata can be found in Table 1. We used CHAI to generate rules for every relation in every dataset, except for the NELL dataset, in which we focused on the same subset of 10 relations as Gardner and Mitchell [11] due to the high number of total relations.…”
Section: Setup and Datasets
confidence: 99%
“…To accomplish their goal, these algorithms typically train a model that takes candidate triples as input and outputs whether they are correct (positive) and should be added to the graph, or incorrect (negative) and should be discarded [6]. The knowledge graph at hand must thus be divided into a training, an (optional) validation, and a test split (subgraphs) [2].…”
Section: Introduction
confidence: 99%
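The split described in the statement above can be sketched as follows. This is a minimal illustration, not AYNEC's actual implementation: the function name `split_triples`, the triple representation as `(head, relation, tail)` tuples, and the 80/10/10 fractions are all assumptions made for the example.

```python
import random

def split_triples(triples, train_frac=0.8, valid_frac=0.1, seed=42):
    """Shuffle (head, relation, tail) triples and divide them into
    training, validation, and test subsets, as in the quoted passage.
    The remaining fraction after train and validation becomes the test set."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = triples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_valid = int(len(shuffled) * valid_frac)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test

# Example: 10 toy triples split 8/1/1
triples = [(f"e{i}", "relatedTo", f"e{i+1}") for i in range(10)]
train, valid, test = split_triples(triples)
```

In practice, evaluation frameworks add constraints on top of a plain random split (e.g., ensuring every entity and relation in the test set also appears in training), which this sketch omits.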
“…As a result, several strategies have been proposed to generate negative counterparts [14][15][16][17]. The accuracy of completion models is known to be highly dependent on the negative triples used during training [2,5,11,15,27]. However, as far as we know, a thorough analysis on how accuracy is impacted by the different negative triple generation strategies is still missing.…”
Section: Introduction
confidence: 99%
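One of the negative-triple generation strategies the statement above alludes to is the classic corruption approach: replace the head or tail of a positive triple with a random entity. The sketch below illustrates that strategy only; `corrupt_triples` is a hypothetical helper, not part of AYNEC or any cited toolkit, and real frameworks offer further variants (type-constrained or nearby-entity replacement, for instance).

```python
import random

def corrupt_triples(triples, entities, per_positive=1, seed=0):
    """Generate negative triples by swapping the head or tail of each
    positive triple for a random entity, rejecting candidates that are
    accidentally positive (a known pitfall of naive corruption)."""
    rng = random.Random(seed)
    positives = set(triples)
    negatives = []
    for h, r, t in triples:
        for _ in range(per_positive):
            while True:
                e = rng.choice(entities)
                # Corrupt the head or the tail with equal probability.
                cand = (e, r, t) if rng.random() < 0.5 else (h, r, e)
                if cand not in positives:
                    negatives.append(cand)
                    break
    return negatives

# Example: two negatives per positive triple
entities = [f"e{i}" for i in range(20)]
triples = [("e0", "likes", "e1"), ("e2", "likes", "e3")]
negatives = corrupt_triples(triples, entities, per_positive=2)
```

Because the choice of corruption strategy changes which "hard" negatives the model sees, it directly affects the accuracy comparisons the quoted passage says are still missing.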