2022
DOI: 10.48550/arxiv.2205.08012
Preprint
CascadER: Cross-Modal Cascading for Knowledge Graph Link Prediction

Abstract: Knowledge graph (KG) link prediction is a fundamental task in artificial intelligence, with applications in natural language processing, information retrieval, and biomedicine. Recently, promising results have been achieved by leveraging cross-modal information in KGs, using ensembles that combine knowledge graph embeddings (KGEs) and contextual language models (LMs). However, existing ensembles are either (1) not consistently effective in terms of ranking accuracy gains or (2) impractically inefficient on lar…
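The cascading idea named in the title and abstract — a cheap KGE model scores all candidate entities, and a more expensive LM-style scorer reranks only the top survivors — can be sketched as follows. This is a minimal illustrative sketch under stated assumptions: the function names, toy scorers, and two-tier structure are not the paper's actual models or implementation.

```python
def cascade_rank(head, relation, candidates, cheap_score, expensive_score, k=10):
    """Two-tier cascade for KG link prediction (illustrative sketch).

    Tier 1: a cheap scorer (e.g. a KGE model) ranks every candidate tail.
    Tier 2: an expensive scorer (e.g. an LM) reranks only the top-k,
    so the costly model runs on k candidates instead of all of them.
    """
    # Tier 1: score and sort all candidates with the cheap model.
    tier1 = sorted(candidates,
                   key=lambda t: cheap_score(head, relation, t),
                   reverse=True)
    top_k, rest = tier1[:k], tier1[k:]
    # Tier 2: rerank only the survivors with the expensive model;
    # candidates pruned in tier 1 keep their tier-1 order.
    reranked = sorted(top_k,
                      key=lambda t: expensive_score(head, relation, t),
                      reverse=True)
    return reranked + rest


# Toy usage with hypothetical score tables standing in for trained models.
def kge_score(h, r, t):
    return {"e1": 0.9, "e2": 0.8, "e3": 0.1}[t]

def lm_score(h, r, t):
    return {"e2": 0.95, "e1": 0.5, "e3": 0.2}[t]

ranking = cascade_rank("h", "r", ["e3", "e1", "e2"], kge_score, lm_score, k=2)
print(ranking)  # ['e2', 'e1', 'e3']
```

The design point the abstract gestures at is efficiency: the expensive scorer is evaluated on only k candidates rather than the full entity set, which is what makes LM-based reranking tractable on large graphs.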

Cited by 3 publications (2 citation statements)
References 28 publications
“…Due to the recency of [37], we leave additional comparisons beyond our real GPT2 baseline to future work. Applying language models to knowledge graphs has been investigated in the general [91,30,93] and scientific domains [49,63]. They can be considered similar to our tests of BERT language models applied to a drug synergy hypergraph ( § 4.1).…”
Section: Language Models for Chemistry and Knowledge Graph Completion
Citation type: mentioning (confidence: 99%)
“…two approaches are completely different learning paradigms, it is difficult to fully utilize both types of features within a single learning model, especially deep features [14]. These issues prevent fully leveraging the combination of structural and semantic features, which is crucial for effective prediction.…”
Citation type: mentioning (confidence: 99%)