HittER: Hierarchical Transformers for Knowledge Graph Embeddings
2020 | Preprint
DOI: 10.48550/arxiv.2008.12813

Cited by 3 publications (3 citation statements)
References 27 publications
“…HittER is a hierarchical Transformer model designed to learn entity and relation representations of knowledge graphs. The model demonstrates good performance in link prediction and question answering on datasets such as FreebaseQA and WebQuestionsSP [36]. KG-BERT uses a pretrained language model, such as BERT, to process the entities, relations, and triples of the knowledge graph as text sequences [37].…”
Section: Knowledge Graph Embedding and Completion (mentioning)
confidence: 99%
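To make the KG-BERT mechanism in the statement above concrete, here is a minimal sketch assuming the HuggingFace transformers API: a triple is serialized as a single text sequence and scored for plausibility by a BERT sequence classifier. The entity and relation strings, the serialization layout, and the untuned bert-base-uncased checkpoint are illustrative assumptions; in practice the classifier is fine-tuned on positive and corrupted triples before its scores mean anything.

```python
# Minimal sketch of KG-BERT-style triple scoring: head, relation, and
# tail are serialized as one text sequence and a BERT classifier scores
# the triple's plausibility. Checkpoint and strings are illustrative.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # plausible vs. implausible
)
model.eval()

def score_triple(head: str, relation: str, tail: str) -> float:
    # Serialize the triple; "[SEP]" is recognized as BERT's special token.
    text = f"{head} [SEP] {relation} [SEP] {tail}"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability of the "plausible" class (meaningful only after fine-tuning).
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(score_triple("Barack Obama", "born in", "Honolulu"))
```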
“…Recently, there has been much interest in modifying this attention to meet various desired specifications, e.g., to encode syntax trees (Strubell et al., 2018; Wang et al., 2019c), character-word lattice structures, and relative positions between words (Shaw et al., 2018; Wang et al., 2019a). There are also a few recent attempts that apply the vanilla Transformer (Wang et al., 2019b) or a hierarchical Transformer (Chen et al., 2020) to KGs, but these are mainly restricted to binary relations and deployed with conventional attention. This work, in contrast, deals with higher-arity relational data represented as heterogeneous graphs, and employs modified attention to encode graph structure and heterogeneity.…”
Section: Related Work (mentioning)
confidence: 99%
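One of the attention modifications the statement cites, the relative-position attention of Shaw et al. (2018), can be sketched briefly: each query-key pair gets a learned embedding indexed by their clipped relative distance, added into the attention logits. This is a single-head simplification; the dimensions and clipping window are illustrative assumptions, not values from any of the cited papers.

```python
# Sketch of relative-position self-attention (Shaw et al., 2018):
# attention logits are content-content (q·k) plus content-position
# (q·a_k), where a_k is an embedding of the clipped relative distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeSelfAttention(nn.Module):
    def __init__(self, d_model: int, max_dist: int = 4):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.max_dist = max_dist
        # One embedding per clipped relative distance in [-max_dist, max_dist].
        self.rel_key = nn.Embedding(2 * max_dist + 1, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); single head for clarity.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        seq_len = x.size(1)
        pos = torch.arange(seq_len)
        # Pairwise relative distances, clipped and shifted to be >= 0.
        rel = (pos[None, :] - pos[:, None]).clamp(-self.max_dist, self.max_dist)
        a_k = self.rel_key(rel + self.max_dist)          # (seq, seq, d_model)
        logits = q @ k.transpose(-2, -1)                 # content-content
        logits = logits + torch.einsum("bqd,qkd->bqk", q, a_k)  # content-position
        attn = F.softmax(logits * self.scale, dim=-1)
        return attn @ v

x = torch.randn(2, 10, 64)
print(RelativeSelfAttention(64)(x).shape)  # torch.Size([2, 10, 64])
```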
“…Though some efforts have been made to incorporate neighborhood information into KGC algorithms (Chen et al., 2020), our proposed pipeline inherits the benefits of generative LLMs compared to KGE approaches: (1) scalability and size of the model; (2) applicability to both transductive and inductive KGs, owing to the ability to generalize to unseen entities; (3) no need to rank all possible candidate triples, since the tail entity is generated directly. A concurrent work by the KGT5 authors, KGT5-context (Kochsiek et al., 2023), proposed a similar idea of integrating the node neighborhood into the context of the generative LM, supporting the main concerns and results of our study.…”
Section: Introduction (mentioning)
confidence: 99%
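The generative pipeline this statement describes can be sketched in a few lines: rather than scoring every candidate triple, a seq2seq LM receives the head entity and relation (optionally with serialized neighborhood facts) and generates the tail entity directly. The prompt format, the neighborhood serialization, and the off-the-shelf t5-small checkpoint are illustrative assumptions, not the actual KGT5/KGT5-context setup; an untuned checkpoint will not produce useful tails without fine-tuning on the KG.

```python
# Sketch of generative tail prediction for KGC: the query (and any
# neighborhood context) is serialized as text and the tail entity is
# decoded directly, avoiding exhaustive candidate ranking.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def predict_tail(head: str, relation: str, neighborhood: list[str]) -> str:
    # Serialize the query plus (hypothetical) neighborhood facts as text.
    context = " | ".join(neighborhood)
    prompt = f"predict tail: {head} [{relation}] context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=16)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(predict_tail("Barack Obama", "born in",
                   ["Barack Obama [profession] politician"]))
```

Generating the tail directly is what gives the claimed inductive behavior: an unseen entity only needs a textual surface form, not a pretrained embedding row.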