Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop 2022
DOI: 10.18653/v1/2022.acl-srw.27
MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering

Cited by 5 publications (3 citation statements); references 0 publications.
“…In Table 4 for SimpleQuestions-Wikidata, results for KEQA and Text2Graph are taken from MEKER [6]. They evaluate both systems on a smaller split of the SimpleQuestions-Wikidata test set.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
“…For simple questions, KEQA [17] aims to jointly recover the question's head entity, predicate, and tail entity representations in the KG embedding spaces and then form a query to retrieve the answer from a KG. Text2Graph [6] uses KEQA as a base and improves on the embedding learning model by utilising CP tensor decomposition [15]. We include both of these systems in our evaluation in Table 4.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
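To make the CP tensor decomposition referenced in the quote above concrete, the sketch below shows how a CP (canonical polyadic) factorization scores knowledge-graph triples and ranks candidate answers for a (head, relation, ?) query, which is the link-prediction step underlying KGQA retrieval. This is a generic, minimal illustration, not the MEKER or Text2Graph implementation; the matrix names, dimensions, and helper functions are illustrative assumptions.

```python
# Minimal sketch of CP-style triple scoring for KG link prediction.
# Hypothetical factor names and sizes; not the MEKER / Text2Graph code.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 1000, 50, 64

# CP factors: one embedding matrix per tensor mode (head, relation, tail).
E_head = rng.normal(scale=0.1, size=(n_entities, rank))
R = rng.normal(scale=0.1, size=(n_relations, rank))
E_tail = rng.normal(scale=0.1, size=(n_entities, rank))


def score(h: int, r: int, t: int) -> float:
    """CP score of a triple: sum_k E_head[h, k] * R[r, k] * E_tail[t, k]."""
    return float(np.sum(E_head[h] * R[r] * E_tail[t]))


def rank_tails(h: int, r: int) -> np.ndarray:
    """Score every candidate tail entity for (h, r, ?) and return entity ids
    sorted from most to least plausible."""
    scores = E_tail @ (E_head[h] * R[r])
    return np.argsort(-scores)


# Example: top-10 candidate answers for a (head, relation) query.
print(rank_tails(h=42, r=7)[:10])
```

In a trained model the factor matrices would be learned from observed triples rather than sampled at random; the scoring and ranking steps stay the same.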