2022
DOI: 10.1007/s10489-022-03913-6
An improving reasoning network for complex question answering over temporal knowledge graphs

Cited by 13 publications (4 citation statements) | References 25 publications
“…We show that our approach outperforms on the datasets MetaQA, WebQSP, and CompWebQ, especially for complex datasets such as WebQSP and CompWebQ. In the future, we plan to investigate the following main issues: how to support dynamic application scenarios in open domains and the rapid updating of knowledge [30], and how to introduce external knowledge, such as knowledge from web pages, to expand the knowledge coverage of the system [31].…”
Section: Discussion | Confidence: 99%
“…They used a time-neighborhood aggregation framework to model interactions between entities. Jiao et al [22] presented an enhanced Complex Temporal Reasoning Network that improves reasoning on complex temporal questions and captures implicit temporal features and relation representations. Nie et al [23] introduced TAL-TKGC to capture the influence of time information on quadruple and node structure information.…”
Section: Related Work | Confidence: 99%
“…In addition to the basic framework, researchers are continuously exploring new technologies and methods, such as attention mechanisms, multi-task learning, transfer learning, and semi-supervised learning [53], to improve the efficacy and generalization ability of the models. Furthermore, some language models, such as T5 [54] and GShard [55], have been proposed to integrate multiple tasks and languages during pre-training, which can further enhance the generalization ability and adaptability of models [56].…”
Section: What Was the First Novel Of The 2022 | Confidence: 99%