2022
DOI: 10.1007/s10489-022-03601-5

TBDRI: block decomposition based on relational interaction for temporal knowledge graph completion

Cited by 9 publications (7 citation statements)
References 28 publications
“…StarE (Galkin et al, 2020) leverages a message passing network, CompGCN (Vashishth et al, 2020), as an encoder to obtain the relation and entity embeddings, which are then fed into a transformer decoder to obtain the validity of facts. Hy-Transformer (Yu and Yang, 2021), GRAN and QUAD (Shomer et al, 2022) further improve it with alternative designs of encoders and via auxiliary training tasks. These models, though useful, require a large number of parameters and are prone to overfitting.…”
Section: Key-value Pairs (mentioning)
confidence: 99%
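The encode-then-decode pattern this excerpt describes can be sketched compactly. The PyTorch snippet below is an editorial illustration under stated assumptions, not StarE's published code: a plain linear layer stands in for the CompGCN-style message-passing encoder, and a small transformer stack scores the (head, relation, tail) sequence for validity. All class names, layer choices, and dimensions are assumptions.

```python
# Minimal sketch of the encode-then-decode scoring pattern the excerpt
# describes. NOT StarE's actual implementation: the "encoder" here is a
# plain linear layer standing in for CompGCN-style message passing, and
# all dimensions are illustrative.
import torch
import torch.nn as nn

class EncodeDecodeScorer(nn.Module):
    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.encoder = nn.Linear(dim, dim)  # stand-in for a graph encoder
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(dim, 1)

    def forward(self, heads, rels, tails):
        # Encode: refine raw embeddings (real encoders aggregate over the graph).
        h = self.encoder(self.ent(heads))
        r = self.encoder(self.rel(rels))
        t = self.encoder(self.ent(tails))
        # Decode: treat (h, r, t) as a 3-token sequence, pool, and score.
        seq = torch.stack([h, r, t], dim=1)   # (batch, 3, dim)
        out = self.transformer(seq).mean(dim=1)
        return self.score(out).squeeze(-1)    # one validity logit per fact

model = EncodeDecodeScorer(n_entities=100, n_relations=10)
print(model(torch.tensor([0, 1]), torch.tensor([2, 3]), torch.tensor([4, 5])))
```

Even in this toy configuration the quoted critique is visible: the transformer stack dominates the parameter count relative to the embedding tables, which hints at why such architectures can overfit.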
“…Baselines We compare ShrinkE against various models, including m-TransH (Wen et al, 2016), RAE (Zhang et al, 2018), NaLP-Fix (Rosso et al, 2020), HINGE (Rosso et al, 2020), NeuInfer (Guan et al, 2020), BoxE (Abboud et al, 2020), Transformer and StarE (Galkin et al, 2020). Note that we exclude Hy-Transformer (Yu and Yang, 2021), GRAN and QUAD (Shomer et al, 2022) because […] and 2) they leverage auxiliary training tasks, which can also be incorporated into our framework and which we leave as future work.…”
Section: Environments and Hyperparameters (mentioning)
confidence: 99%
“…To address this issue, various temporal knowledge graph embedding (KGE) models have been proposed that encode entities and relations in a low-dimensional space using translation-based functions [1,2], deep neural network-based methods [3,4], and tensor decomposition methods [5,6]. As these methodologies have progressed, numerous strategies for TKG completion have been proposed, including geometric methods (ChronoR [7], BoxTE [8], TLT-KGE [9], RotateQVS [10], HTTR [11], PTKE [12]); tensor decomposition methods (QDN [13], TBDRI [14]); deep learning and embedding-based methods (TASTER [15], TeAST [16], RoAN [17], BiQCap [18]); and graph neural network-based reasoning methods (TARGCN [19], T-GAP [20], TAL-TKGC [21]). Interpolation is a statistical method that utilizes the relevant known values to estimate an unknown value or set [19].…”
Section: Introduction (mentioning)
confidence: 99%
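Since this excerpt groups TBDRI with tensor decomposition methods, a small worked example helps make the category concrete. The sketch below is an editorial illustration, not the published TBDRI or QDN formulation: it scores a quadruple (subject, relation, object, timestamp) with a DistMult-style multilinear product in which a timestamp embedding modulates the relation factor. All names and dimensions are assumptions.

```python
# Toy illustration of time-aware tensor-decomposition scoring for TKG
# completion. NOT the published TBDRI/QDN model: a DistMult-style
# multilinear product with an extra timestamp factor; sizes illustrative.
import torch
import torch.nn as nn

class TimeAwareDistMult(nn.Module):
    def __init__(self, n_ent, n_rel, n_time, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.time = nn.Embedding(n_time, dim)

    def forward(self, s, r, o, t):
        # score(s, r, o, t) = sum_k e_s[k] * (r_r[k] * tau_t[k]) * e_o[k]
        return (self.ent(s) * self.rel(r) * self.time(t) * self.ent(o)).sum(-1)

model = TimeAwareDistMult(n_ent=50, n_rel=5, n_time=12)
quad = [torch.tensor([0]), torch.tensor([1]), torch.tensor([2]), torch.tensor([3])]
print(model(*quad))  # one plausibility score per quadruple
```

Under the excerpt's definition of interpolation, completion then amounts to ranking candidate entities by this score at timestamps already observed during training.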