Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2016
DOI: 10.18653/v1/P16-1220

Question Answering on Freebase via Relation Extraction and Textual Evidence

Abstract: Existing knowledge-based question answering systems often rely on small annotated training data. While shallow methods like relation extraction are robust to data scarcity, they are less expressive than the deep meaning representation methods like semantic parsing, thereby failing at answering questions involving multiple constraints. Here we alleviate this problem by empowering a relation extraction method with additional evidence from Wikipedia. We first present a neural network based relation extractor to retrieve the candidate answers from Freebase, and then infer over Wikipedia to validate these answers.

Cited by 255 publications (182 citation statements)
References 37 publications
“…A triple ⟨e1, r, e2⟩ is called a relation instance, and we refer to the relation of the target entity pair as the target relation. Relation extraction is a fundamental task that enables a wide range of semantic applications, from question answering (Xu et al., 2016) to fact checking (Vlachos and Riedel, 2014).…”
Section: Introduction (mentioning)
confidence: 99%
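The ⟨e1, r, e2⟩ notation in this excerpt is easy to make concrete. Below is a minimal Python sketch of a relation instance; the entity and relation names are invented for illustration:

```python
from typing import NamedTuple

class RelationInstance(NamedTuple):
    """A triple <e1, r, e2>: two entities connected by a relation."""
    e1: str  # subject entity
    r: str   # target relation (the relation of the target entity pair)
    e2: str  # object entity

# Invented example of the kind of fact a relation extractor retrieves
# from a KB such as Freebase for question answering.
fact = RelationInstance("Barack Obama", "people.person.place_of_birth", "Honolulu")
print(fact.r)  # -> people.person.place_of_birth
```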
“…We use the detected topic entity mentions to obtain candidate matching entities in the KB using the Freebase Search API. We use the top-3 entities returned for the pruning step of Question Abstraction on the test examples. Answer Type Prediction.…”

Results table interleaved in the excerpt (reconstructed; two scores appear without attribution in the extracted text):

Model                        F1
(Berant et al., 2013)        35.7
(Yao and Van Durme, 2014)    33.0
(Berant and Liang, 2014)     39.9
(Bao et al., 2014)           37.5
(Bordes et al., 2014)        39.2
(Yang et al., 2014)          41.3
(Dong et al., 2015b)         40.8
(Yao, 2015)                  44.3
(Berant and Liang, 2015)     49.7
(unattributed in excerpt)    52.5
(unattributed in excerpt)    50.3
(Xu et al., 2016)            53.3

Section: Methods (mentioning)
confidence: 99%
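The entity-linking step this excerpt describes (retrieve candidate KB entities for a mention, keep the top 3) can be sketched as follows. The Freebase Search API has been retired, so `search_kb` is a hypothetical stand-in with made-up ids and scores, not the actual service:

```python
def search_kb(mention: str) -> list[tuple[str, float]]:
    """Hypothetical stand-in for a KB entity search service (the real
    Freebase Search API has been retired). Returns (entity_id, score)
    pairs sorted best-first; the ids and scores here are made up."""
    toy_index = {
        "obama": [
            ("m.02mjmr", 0.97),
            ("m.025s5v9", 0.41),
            ("m.04x1_", 0.12),
            ("m.0d05q4", 0.05),
        ],
    }
    return toy_index.get(mention.lower(), [])

def candidate_entities(mention: str, k: int = 3) -> list[str]:
    """Keep only the top-k matches, mirroring the top-3 pruning step
    described in the quoted excerpt."""
    return [entity_id for entity_id, _ in search_kb(mention)[:k]]

print(candidate_entities("Obama"))  # -> ['m.02mjmr', 'm.025s5v9', 'm.04x1_']
```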
“…By working against a knowledge graph, crisp entities can be returned as answers. By exploiting the structure provided by the knowledge graph and extracting relationships between entities, one can also answer complex questions that require multiple joins, corresponding to paths in the knowledge graph [6,13,14]. In these cases, a knowledge graph can be used to return answers that are proper tuples of entities, rather than singletons.…”
Section: Related Work (mentioning)
confidence: 99%
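The "multiple joins as paths" idea in this excerpt can be illustrated with a toy graph. This is a minimal sketch, assuming the knowledge graph is just a set of ⟨e1, r, e2⟩ edges; the entities and the example question are invented:

```python
from collections import defaultdict

# Toy knowledge graph: successors indexed by (entity, relation).
kg = defaultdict(list)
edges = [
    ("MarieCurie", "spouse", "PierreCurie"),
    ("PierreCurie", "award", "NobelPrizePhysics"),
    ("MarieCurie", "award", "NobelPrizeChemistry"),
]
for e1, r, e2 in edges:
    kg[(e1, r)].append(e2)

def follow_path(start: str, relations: list[str]) -> set[str]:
    """Answer a multi-join question by walking a relation path.
    E.g. 'Which award did Marie Curie's spouse win?' becomes the
    path [spouse, award] starting from MarieCurie."""
    frontier = {start}
    for r in relations:
        frontier = {e2 for e1 in frontier for e2 in kg[(e1, r)]}
    return frontier

print(follow_path("MarieCurie", ["spouse", "award"]))
# -> {'NobelPrizePhysics'}
```

Each relation in the path corresponds to one join over the edge set, which is why answers to such questions trace out paths (and, more generally, tuples of entities) in the graph.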