Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)
DOI: 10.18653/v1/d18-1455
Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text

Abstract: Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. Specialized neural models have been developed for extracting answers from either text alone or Knowledge Bases (KBs) alone. In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. Building on recent advances in graph representation learning we propose a novel model…
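The abstract describes building a question-specific subgraph that fuses KB facts with entity-linked text before any neural reasoning runs. As a rough illustration of that "early fusion" step, here is a minimal Python sketch; the types and the `build_question_subgraph` helper are hypothetical names for illustration, not the paper's code, and a real system would use indexes rather than linear scans.

```python
# Minimal sketch of "early fusion": merge KB triples and entity-linked
# sentences into a single question subgraph *before* neural reasoning,
# so answers can come from either source. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """A KB fact: (subject, relation, object)."""
    subj: str
    rel: str
    obj: str

@dataclass(frozen=True)
class Sentence:
    """A corpus sentence together with its linked entities."""
    text: str
    entities: frozenset

def build_question_subgraph(seed_entities, kb, corpus, hops=2):
    """Expand outward from the entities mentioned in the question,
    collecting KB triples and sentences that touch the frontier."""
    nodes = set(seed_entities)
    frontier = set(seed_entities)
    triples, sentences = set(), set()
    for _ in range(hops):
        next_frontier = set()
        for t in kb:  # KB side of the fusion
            if t.subj in frontier or t.obj in frontier:
                triples.add(t)
                next_frontier |= {t.subj, t.obj} - nodes
        for s in corpus:  # text side of the fusion
            if s.entities & frontier:
                sentences.add(s)
                next_frontier |= s.entities - nodes
        nodes |= next_frontier
        frontier = next_frontier
    return nodes, triples, sentences
```

For a question such as "who directed film X", calling `build_question_subgraph({"X"}, kb, corpus)` would pull in both the KB triple for X and any sentence mentioning X, so the downstream model can still answer when the relevant KB edge is missing.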

Cited by 332 publications (365 citation statements); references 42 publications.
“…While templates continued to be a strong line of work due to their focus on interpretability and generalizability [1,2,4,7,35], a parallel thread has focused on neural methods driven by performance gains [15,20,31]. Newer trends include shifts towards more complex questions [19,21,34], and fusion of knowledge graphs and text [31,33]. However, none of these approaches can deal with incomplete questions in a conversational setting.…”
Section: Related Work (mentioning)
Confidence: 99%
“…For 1-hop questions in MetaQA (which is identical to WikiMovies), our model is comparable to the state-of-the-art. For the other three settings, the performance of our re-implementation is slightly worse than the results reported in the original GRAFT-Net paper (Sun et al., 2018); this is likely because we use a simpler retrieval module.…”
Section: MetaQA (mentioning)
Confidence: 57%
“…GRAFT-Net (Sun et al., 2018) supports multi-hop reasoning on both KBs and text by introducing a question subgraph built with facts and text, and uses a learned graph representation (Kipf and Welling, 2016; Li et al., 2016; Schlichtkrull et al., 2017; Scarselli et al., 2009) to perform the "reasoning" required to select the answer. We use the same representation and reasoning scheme as GRAFT-Net, but do not require that the entire graph be retrieved in a single step.…”
Section: Related Work (mentioning)
Confidence: 99%
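The excerpt above summarizes the reasoning component: propagate information over the question subgraph with a learned graph representation, then score nodes as candidate answers. Below is a hedged sketch of that update, following the generic graph-convolution recipe of Kipf and Welling (2016) rather than GRAFT-Net's exact heterogeneous rule; the weights here are random purely for illustration.

```python
# Each node (KB entity or text mention) averages its neighbours' states and
# mixes them with its own; after a few layers, answer selection reduces to
# scoring node states. A generic GCN-style sketch, not GRAFT-Net's exact code.
import numpy as np

def gcn_layer(H, A, W_self, W_nbr):
    """One propagation step.
    H: (n, d) node states; A: (n, n) adjacency of the question subgraph;
    W_self, W_nbr: (d, d) weights (random here purely for illustration)."""
    deg = A.sum(axis=1, keepdims=True) + 1e-8  # guard against isolated nodes
    msgs = (A @ H) / deg                       # mean over neighbour states
    return np.tanh(H @ W_self + msgs @ W_nbr)  # fuse self and neighbourhood

rng = np.random.default_rng(0)
n, d = 5, 8                                    # 5 toy nodes, 8-dim states
H = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.4).astype(float)   # random toy subgraph
np.fill_diagonal(A, 0)
for _ in range(2):                             # two reasoning hops
    H = gcn_layer(H, A,
                  0.1 * rng.normal(size=(d, d)),
                  0.1 * rng.normal(size=(d, d)))
scores = H @ rng.normal(size=d)                # toy per-node answer scores
```

In a trained system the weight matrices would be learned and the adjacency would come from the retrieved KB/text subgraph; the sketch only shows the propagate-then-score shape of the computation.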
“…In this work we focus on the recent NarrativeQA (Kocisky et al., 2018) dataset that was designed not to be easy to answer and that requires a model to read narrative stories and answer questions about them. In terms of model architecture, previous work in reading comprehension and question answering has focused on integrating external knowledge (linguistic and/or knowledge-based) into recurrent neural network models using Graph Neural Networks (Song et al., 2018), Graph Convolutional Networks (Sun et al., 2018; De Cao et al., 2019), attention (Das et al., 2017; Mihaylov and Frank, 2018; Bauer et al., 2018) or pointers to coreferent mentions (Dhingra et al., 2017).…”
Section: Introduction (mentioning)
Confidence: 99%