2022
DOI: 10.48550/arxiv.2211.02744
Preprint

KGLM: Integrating Knowledge Graph Structure in Language Models for Link Prediction

Abstract: The ability of knowledge graphs to represent complex relationships at scale has led to their adoption for various needs including knowledge representation, question answering, fraud detection, and recommendation systems. Knowledge graphs are often incomplete in the information they represent, necessitating knowledge graph completion tasks such as link and relation prediction. Pre-trained and fine-tuned language models have shown promise in these tasks, although these models ignore the intrinsic in…
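As a point of reference for the link prediction task named in the abstract, the sketch below computes the rank-based metrics such models are conventionally scored with (MR, MRR, hits@k). The ranks are synthetic and purely illustrative, not results from the paper.

```python
# Minimal sketch of rank-based link-prediction evaluation (MR, MRR, hits@k).
# In practice, each rank is the position of the true tail entity among all
# candidate entities scored by the model; here the ranks are synthetic.
import numpy as np

ranks = np.array([1, 3, 2, 15, 7, 1, 42, 5])  # illustrative ranks of true entities

mr = ranks.mean()            # mean rank (lower is better)
mrr = (1.0 / ranks).mean()   # mean reciprocal rank (higher is better)
hits_at = {k: (ranks <= k).mean() for k in (1, 3, 10)}

print(f"MR={mr:.2f}  MRR={mrr:.3f}  " +
      "  ".join(f"hits@{k}={v:.3f}" for k, v in hits_at.items()))
```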

Cited by 2 publications (2 citation statements)
References 22 publications
“…The link prediction models were also calibrated using isotonic regression to provide an interpretable probability score. Link prediction models are commonly evaluated using rank-based metrics like mean rank (MR), mean reciprocal rank (MRR), hits@1, hits@3, and hits@10 [47]. However, our end goal was to generate hypotheses that were either true or false, and therefore, we decided to also evaluate using standard binary classification metrics like confusion matrix, precision, and recall.…”
Section: Methods (mentioning)
confidence: 99%
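The calibration and evaluation workflow this statement describes can be sketched as follows, assuming scikit-learn and synthetic scores and labels; the variable names and the 0.5 threshold are illustrative assumptions, not details taken from the citing paper.

```python
# Minimal sketch: calibrating link-prediction scores with isotonic
# regression, then evaluating with binary classification metrics.
# All data here is synthetic; names are illustrative.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import confusion_matrix, precision_score, recall_score

rng = np.random.default_rng(0)
# Raw, uncalibrated scores from some link predictor (higher = more likely true).
raw_scores = rng.uniform(size=1000)
# Ground-truth labels: 1 if the triple (head, relation, tail) is a true link.
labels = (raw_scores + rng.normal(scale=0.3, size=1000) > 0.5).astype(int)

# Fit isotonic regression to map raw scores to calibrated probabilities.
iso = IsotonicRegression(out_of_bounds="clip")
calibrated = iso.fit_transform(raw_scores, labels)

# Threshold the calibrated probabilities and report classification metrics.
preds = (calibrated >= 0.5).astype(int)
print(confusion_matrix(labels, preds))
print("precision:", precision_score(labels, preds))
print("recall:", recall_score(labels, preds))
```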
“…Due to the recency of [37], we leave additional comparisons beyond our real GPT2 baseline to future work. Applying language models to knowledge graphs has been investigated in the general [91,30,93] and scientific domains [49,63]. They can be considered similar to our tests of BERT language models applied to a drug synergy hypergraph (§4.1).…”
Section: Language Models For Chemistry and Knowledge Graph Completion (mentioning)
confidence: 99%
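As a rough illustration of the recipe this statement alludes to (applying a pretrained language model to verbalized knowledge-graph triples), the sketch below scores candidate tail entities with BERT via the Hugging Face fill-mask pipeline. The model choice and the cloze template are assumptions for demonstration, not taken from the cited works.

```python
# Minimal sketch: verbalize an incomplete triple (head, relation, ?) as a
# cloze sentence and let a pretrained masked LM rank candidate tails.
# Model and template are illustrative, not from the cited papers.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The [MASK] token stands in for the missing tail entity of the triple.
candidates = fill("Aspirin is used to treat [MASK].", top_k=5)
for c in candidates:
    print(f"{c['token_str']:>12}  score={c['score']:.3f}")
```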