Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/379
Revision by Comparison for Ranking Functions

Abstract: Relational graph neural networks have garnered particular attention for encoding graph context in knowledge graphs (KGs). Although they achieve competitive performance on small KGs, how to efficiently and effectively utilize graph context for large KGs remains an open problem. To this end, we propose the Relation-based Embedding Propagation (REP) method. It is a post-processing technique that adapts pre-trained KG embeddings with graph context. As relations in KGs are directional, we model the incoming head context…
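The abstract describes REP only at a high level, so the following is a minimal, hypothetical sketch of what relation-aware context propagation as a post-processing step over pre-trained embeddings could look like. The TransE-style composition (h + r ≈ t), the mixing weight alpha, and the function name rep_post_process are assumptions made for illustration, not the paper's actual formulation.

```python
import numpy as np

def rep_post_process(entity_emb, relation_emb, triples, alpha=0.5, iterations=1):
    """Hypothetical sketch of relation-based context propagation applied as a
    post-processing step over pre-trained KG embeddings.

    entity_emb:   (num_entities, dim) array of pre-trained entity vectors
    relation_emb: (num_relations, dim) array of pre-trained relation vectors
    triples:      iterable of (head, relation, tail) index triples
    alpha:        assumed mixing weight between the original embedding and the
                  aggregated graph context
    """
    ent = entity_emb.copy()
    for _ in range(iterations):
        ctx = np.zeros_like(ent)
        count = np.zeros(len(ent))
        for h, r, t in triples:
            # Incoming head context for the tail: TransE-style composition h + r ~ t.
            ctx[t] += ent[h] + relation_emb[r]
            count[t] += 1
            # Outgoing tail context for the head: t - r ~ h.
            ctx[h] += ent[t] - relation_emb[r]
            count[h] += 1
        mask = count > 0
        ctx[mask] /= count[mask, None]
        # Blend each pre-trained embedding with its averaged relational context.
        ent[mask] = (1 - alpha) * ent[mask] + alpha * ctx[mask]
    return ent
```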

Cited by 1 publication (1 citation statement, 2023)
References 1 publication
“…There have been several approaches to belief revision with input information that is accompanied by some kind of supplementary information, see e.g. (Hunter 2021;Ammar and Ismail 2021;Sezgin and Kern-Isberner 2022), indicating trust or reliance towards the input information. In this paper, we present a revision method in a qualitative and a semi-quantitative framework based on Bounded Revision (BR) which was firstly introduced by (Rott 2012).…”
Section: Introduction
confidence: 99%
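For readers unfamiliar with the setting of the cited work, the sketch below illustrates Spohn-style ranking functions (ordinal conditional functions) and a standard (A, m)-conditionalization step, i.e. the kind of input-with-strength revision the quoted passage alludes to. It is background illustration only, not the paper's revision-by-comparison operator nor Rott's Bounded Revision; the function names are invented for the example.

```python
# Illustrative sketch: ranking functions (OCFs) assign an implausibility rank
# to each world; a proposition is believed iff its negation has rank > 0.

def normalize(kappa):
    """Shift ranks so that the most plausible worlds get rank 0."""
    m = min(kappa.values())
    return {w: r - m for w, r in kappa.items()}

def rank(kappa, prop):
    """Rank of a proposition = minimal rank of its worlds (inf if empty)."""
    ranks = [r for w, r in kappa.items() if prop(w)]
    return min(ranks) if ranks else float("inf")

def conditionalize(kappa, prop, m):
    """(A, m)-conditionalization: accept prop with firmness m."""
    rank_a = rank(kappa, prop)
    rank_not_a = rank(kappa, lambda w: not prop(w))
    return {
        w: (r - rank_a) if prop(w) else (r - rank_not_a + m)
        for w, r in kappa.items()
    }

# Worlds over two atoms (p, q); ranks encode degrees of implausibility.
kappa = {("p", "q"): 0, ("p",): 1, ("q",): 2, (): 3}
revised = conditionalize(kappa, lambda w: "q" in w, m=2)
print(normalize(revised))  # q-worlds become maximally plausible; q is believed with firmness 2
```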