Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.345

End-to-End Entity Resolution and Question Answering Using Differentiable Knowledge Graphs

Abstract: Recently, end-to-end (E2E) trained models for question answering over knowledge graphs (KGQA) have delivered promising results using only a weakly supervised dataset. However, these models are trained and evaluated in a setting where hand-annotated question entities are supplied to the model, leaving the important and non-trivial task of entity resolution (ER) outside the scope of E2E learning. In this work, we extend the boundaries of E2E learning for KGQA to include the training of an ER component. Our model…
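
To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of one-hop inference over a differentiable knowledge graph: the KG is stored as one adjacency matrix per relation, an entity-resolution module supplies a soft distribution over question entities, a question encoder supplies a soft distribution over relations, and the answer distribution falls out of ordinary matrix products, so the whole pipeline stays end-to-end trainable. All names below (triples, M, p_entity, p_relation) are illustrative.

import numpy as np

# A toy knowledge graph: integer ids for entities/relations, and facts as
# (subject, relation, object) triples.
num_entities, num_relations = 4, 2
triples = [
    (0, 0, 1),  # e0 --r0--> e1
    (0, 1, 2),  # e0 --r1--> e2
    (3, 0, 2),  # e3 --r0--> e2
]

# One adjacency matrix per relation: M[r, s, o] = 1 iff (s, r, o) is a fact.
M = np.zeros((num_relations, num_entities, num_entities))
for s, r, o in triples:
    M[r, s, o] = 1.0

# Soft distributions that an ER module and a question encoder would produce;
# hard-coded here purely for illustration.
p_entity = np.array([0.9, 0.0, 0.0, 0.1])   # question entity: mostly e0
p_relation = np.array([0.2, 0.8])           # question relation: mostly r1

# One-hop "follow" operation: a relation-weighted sum of adjacency matrices,
# then a vector-matrix product. Every step is differentiable, so gradients can
# flow back into both the ER and the relation-prediction components.
p_answer = p_entity @ np.einsum("r,rso->so", p_relation, M)
print(p_answer.round(2))  # most mass on e2, the object of (e0, r1, e2)

In a real system, sparse adjacency matrices over millions of entities and learned encoders replace the hard-coded toy vectors above.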

Cited by 12 publications (6 citation statements). References 12 publications.

“…For datasets, we use WebQSP (Yih et al., 2016), which is modified from Berant et al. (2013) to filter out unanswerable questions, and Mintaka (Sen et al., 2022). Further, for the knowledge source, we use Wikidata, which includes billions of facts represented as the triplet (subject, relation, object), and we follow the standard preprocessing setup for KGQA (Saffari et al., 2021; Baek et al., 2023).…”
Section: Open-domain Question Answering
Mentioning confidence: 99%
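
As a purely illustrative aside, the (subject, relation, object) triplet format mentioned in the excerpt above can be sketched as follows; the entity and relation names are made-up placeholders, not actual Wikidata identifiers, and this is not the cited papers' preprocessing code.

from collections import defaultdict

# Toy facts in (subject, relation, object) form; names are illustrative only.
facts = [
    ("barack_obama", "place_of_birth", "honolulu"),
    ("barack_obama", "spouse", "michelle_obama"),
    ("honolulu", "located_in", "hawaii"),
]

# A common preprocessing step is to index facts by (subject, relation) so that
# one-hop lookups during question answering become a dictionary access.
index = defaultdict(list)
for s, r, o in facts:
    index[(s, r)].append(o)

print(index[("barack_obama", "place_of_birth")])  # ['honolulu']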
“…These QA systems are designed to answer questions from datasets such as Natural Questions (Kwiatkowski et al., 2019). The knowledge needed to answer questions can be in pre-trained models, knowledge graphs (KGs) (Lin et al., 2019; Feng et al., 2020; Lv et al., 2020; Saffari et al., 2021), or document collections (Chen et al., 2017; Izacard and Grave, 2021; Guu et al., 2020). In retrieval-based systems, differentiable retrieval can be combined with extractive question answering, as in REALM (Guu et al., 2020) and ORQA, as well as with generative answer generation, as in RAG.…”
Section: Related Work
Mentioning confidence: 99%
“…Hence, annotation in RE is the process of choosing relations between head and tail entities in the context from a collection of relation types. Relation extraction is an indispensable component for composing knowledge graphs (Li et al., 2019) that are useful to various downstream NLP applications such as question answering (Dubey, 2021; Saffari et al., 2021; Sen et al., 2021) and dialogue systems (Liu et al., 2021b; Chaudhuri et al., 2021; Gao et al., 2021a).…”
Section: Re-annotation in Relation Extraction
Mentioning confidence: 99%