Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1133
Semantic Parsing via Paraphrasing

Abstract: A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization…
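To make the abstract's pipeline concrete, here is a minimal, self-contained sketch of the idea: enumerate candidate logical forms, realize each as a canonical utterance, and keep the one that best paraphrases the input. The candidate generator, realizer, and word-overlap scorer below are toy stand-ins for illustration only, not the paper's actual models.

```python
def candidate_logical_forms(utterance):
    # Toy deterministic generator: the paper enumerates logical forms
    # over a knowledge base; here we hard-code two candidates.
    return ["Type.Person AND PlaceOfBirth.Seattle",
            "Type.Person AND PlacesLived.Seattle"]

def canonical_utterance(lf):
    # Toy rule-based realization of a logical form as English.
    realizations = {
        "Type.Person AND PlaceOfBirth.Seattle": "person born in seattle",
        "Type.Person AND PlacesLived.Seattle": "person who lived in seattle",
    }
    return realizations[lf]

def paraphrase_score(a, b):
    # Toy word-overlap (Jaccard) score; the paper instead trains an
    # association model and a vector space model from QA pairs.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def parse(utterance):
    # Keep the logical form whose canonical realization best
    # paraphrases the input utterance.
    return max(candidate_logical_forms(utterance),
               key=lambda lf: paraphrase_score(utterance, canonical_utterance(lf)))

print(parse("who was born in seattle"))
# -> Type.Person AND PlaceOfBirth.Seattle
```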

Cited by 477 publications (437 citation statements)
References 14 publications
“…Different from our work, which uses a semantic space defined by a knowledge base, the hidden state connecting the source and target RNNs is a vector of implicit, inexplicable real numbers. Learning semantic information from a sentence, also called semantic grounding, is widely used for question answering tasks (Liang et al., 2011; Berant et al., 2013; Bao et al., 2014; Berant and Liang, 2014). In Yih et al. (2015), a deep convolutional neural network (CNN) maps the question sentence into a query graph, based on which the answer is searched for in the knowledge base.…”
Section: Related Work
confidence: 99%
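As a rough illustration of the query-graph idea mentioned in this excerpt, the toy sketch below matches a query graph with one unbound variable against a small triple store; it is only a stand-in for Yih et al.'s staged query-graph generation, and the KB triples are invented.

```python
# Toy triple store; real systems query a large KB such as Freebase.
KB = [("Obama", "place_of_birth", "Honolulu"),
      ("Obama", "profession", "Politician"),
      ("Honolulu", "contained_by", "Hawaii")]

def answer(query_graph, kb=KB):
    """query_graph: (topic_entity, relation, '?x'); return bindings for ?x."""
    subj, rel, _ = query_graph
    return [obj for s, r, obj in kb if s == subj and r == rel]

print(answer(("Obama", "place_of_birth", "?x")))  # -> ['Honolulu']
```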
“…We use the detected topic entity mentions to obtain candidate matching entities in the KB using the Freebase Search API. We use the top-3 entities returned for the pruning step of Question Abstraction on the test examples. [Results table, Model / F1: Berant et al. (2013) 35.7; Yao and Van Durme (2014) 33.0; Berant and Liang (2014) 39.9; Bao et al. (2014) 37.5; Bordes et al. (2014) 39.2; Yang et al. (2014) 41.3; Dong et al. (2015b) 40.8; Yao (2015) 44.3; Berant and Liang (2015) 49.7; 52.5; 50.3; Xu et al. (2016) 53] Answer Type Prediction.…”
Section: Methods
confidence: 99%
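The entity step quoted above (look up each detected topic-entity mention in the KB and keep the top-3 hits for pruning) can be sketched as follows. Here kb_search is a hypothetical in-memory stand-in for a KB search service such as the Freebase Search API, and the scores are made up for illustration.

```python
def kb_search(mention):
    # Toy index standing in for a KB search service; entity IDs and
    # relevance scores are illustrative only.
    index = {"seattle": [("m.0d9jr", "Seattle", 0.95),
                         ("m.0kjgl", "Seattle Seahawks", 0.40),
                         ("m.xyz1", "Seattle Sounders FC", 0.35),
                         ("m.xyz2", "Port of Seattle", 0.20)]}
    return index.get(mention.lower(), [])

def candidate_entities(mentions, k=3):
    """Return the top-k KB entities per detected mention, as used for
    the pruning step."""
    return {m: sorted(kb_search(m), key=lambda e: e[2], reverse=True)[:k]
            for m in mentions}

print(candidate_entities(["Seattle"]))
```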
“…in QA [Dong et al., 2017] and semantic parsing [Berant and Liang, 2014]. Our paraphrase model is unique in that our neural generator produces paraphrases whose words lie within the training vocabulary of the base parser, unlike other models.…”
Section: Paraphrase Generation Model
confidence: 99%
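The vocabulary constraint described in this excerpt can be sketched as masking, at each decoding step, any candidate word the base parser has never seen during training. The greedy step and the word scores below are toy stand-ins, not the citing paper's actual neural generator.

```python
# Hypothetical base-parser training vocabulary, for illustration.
PARSER_VOCAB = {"person", "born", "in", "seattle", "who", "was", "<eos>"}

def constrained_step(scores_by_word, parser_vocab=PARSER_VOCAB):
    """Greedy decoding step that masks out any word outside the base
    parser's vocabulary, so every generated paraphrase stays parseable."""
    allowed = {w: s for w, s in scores_by_word.items() if w in parser_vocab}
    if not allowed:                       # fall back if everything is masked
        return "<eos>"
    return max(allowed, key=allowed.get)

# Example: "birthplace" scores highest but is out-of-vocabulary for the
# parser, so the constrained step emits "born" instead.
step_scores = {"birthplace": 2.1, "born": 1.7, "the": 0.4}
print(constrained_step(step_scores))      # -> born
```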
“…This enables us to improve any existing trained and deployed parser without having to retrain it. In ParaSempre [Berant and Liang, 2014], multiple paraphrases are generated from candidate logical forms and side information from a KB. The final logical form is selected by measuring the similarity between the query and the candidate paraphrases.…”
Section: Paraphrase Generation Model
confidence: 99%
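The selection step this excerpt describes, scoring each candidate paraphrase against the query and returning the logical form behind the best one, is sketched below. Bag-of-words cosine similarity is a crude stand-in for the paper's learned scorers (its vector space model uses word embeddings), and the candidate pairs are invented for the example.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity over bag-of-words count vectors.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (sqrt(sum(c * c for c in va.values()))
            * sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def select_logical_form(query, candidates):
    """candidates: list of (logical_form, generated_paraphrase) pairs;
    return the logical form whose paraphrase is most similar to the query."""
    return max(candidates, key=lambda c: cosine(query, c[1]))[0]

candidates = [("PlaceOfBirth(Obama)", "where was obama born"),
              ("PlacesLived(Obama)", "where did obama live")]
print(select_logical_form("what city was obama born in", candidates))
# -> PlaceOfBirth(Obama)
```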