Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d16-1014
Creating Causal Embeddings for Question Answering with Minimal Supervision

Abstract: A common model for question answering (QA) is that a good answer is one that is closely related to the question, where relatedness is often determined using general-purpose lexical models such as word embeddings. We argue that a better approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific embeddings. With causality as a use case, we implement this insight in three steps. First, we gener…
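Although the remaining steps are truncated here, the citation statements below confirm that the task-specific embeddings are trained on cause-effect word pairs. As a rough illustration of that training step, the following minimal Python sketch uses a skip-gram-style objective with negative sampling in which cause-side vectors predict effect-side vectors; the function name, hyperparameters, and training loop are assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: train directional embeddings on (cause, effect)
# word pairs with a skip-gram / negative-sampling style objective.
# Vocabulary handling, sampling, and hyperparameters are assumptions.
import numpy as np

def train_causal_embeddings(pairs, vocab, dim=50, epochs=5,
                            neg=5, lr=0.025, seed=0):
    rng = np.random.default_rng(seed)
    idx = {w: i for i, w in enumerate(vocab)}
    # W holds cause-side vectors, C holds effect-side vectors.
    W = (rng.random((len(vocab), dim)) - 0.5) / dim
    C = np.zeros((len(vocab), dim))
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for cause, effect in pairs:
            ci = idx[cause]
            # One positive (the observed effect) plus `neg` random negatives.
            targets = [idx[effect]] + list(rng.integers(0, len(vocab), neg))
            labels = [1.0] + [0.0] * neg
            grad = np.zeros(dim)
            for t, label in zip(targets, labels):
                g = (sigmoid(W[ci] @ C[t]) - label) * lr
                grad += g * C[t]
                C[t] -= g * W[ci]
            W[ci] -= grad
    return W, C  # cause-side vectors, effect-side vectors
```

Because the input and output matrices are distinct, the learned relation is directional: querying a cause vector against the effect-side vectors asks "what does this cause?", which is the kind of relatedness a why-question needs.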

Cited by 41 publications (63 citation statements)
References 28 publications
“…Causal word embeddings (Sharp et al., 2016) were proposed for representing the causal associations between words. Sharp et al. (2016) created a set of cause-effect word pairs by pairing each content word in a cause part with each content word in an effect part of the same causality expression, such as "Volcanoes erupt because magma pushes through vents and fissures." In this work, we extracted 100 million causality expressions from 4 billion Japanese web pages using the causality recognizer of Oh et al. (2013).…”
Section: Word Embeddings (mentioning, confidence: 99%)
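To make the pairing step concrete, here is a minimal, hypothetical Python sketch that matches a simple "EFFECT because CAUSE" pattern, drops stopwords, and takes the cross product of cause-part and effect-part content words. The regular expression and toy stopword list are illustrative assumptions, not the extraction pipeline of Sharp et al. (2016) or the causality recognizer of Oh et al. (2013).

```python
# Hypothetical sketch: generate cause-effect word pairs from one
# "EFFECT because CAUSE" expression, as described above.
import re

STOPWORDS = {"the", "a", "an", "and", "through", "it", "is"}  # toy list

def cause_effect_pairs(sentence: str):
    """Pair every content word in the cause part with every
    content word in the effect part of a 'because' expression."""
    match = re.match(r"(?P<effect>.+?)\s+because\s+(?P<cause>.+)", sentence)
    if match is None:
        return []
    tokenize = lambda part: [
        w for w in re.findall(r"[a-z]+", part.lower()) if w not in STOPWORDS
    ]
    cause_words = tokenize(match.group("cause"))
    effect_words = tokenize(match.group("effect"))
    return [(c, e) for c in cause_words for e in effect_words]

print(cause_effect_pairs(
    "Volcanoes erupt because magma pushes through vents and fissures."))
# e.g. [('magma', 'volcanoes'), ('magma', 'erupt'), ('pushes', 'volcanoes'), ...]
```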
“…For why-QA, several neural methods (Sharp et al. 2016; Tan et al. 2016; dos Santos et al. 2016; Oh et al. 2017) showed significant performance improvements over methods based on conventional machine learning, such as support vector machines (Girju 2003; Higashinaka and Isozaki 2008; Verberne et al. 2011; Oh et al. 2013; Oh et al. 2016). Oh et al. (2017) used causality expressions that were automatically extracted from the Web for why-QA, like our approach.…”
Section: Related Work (mentioning, confidence: 99%)
“…One such task is non-factoid question answering (QA), such as why-question answering (why-QA) and how-to question answering. Although many attempts have been made to develop highly accurate non-factoid QA methods (Girju 2003; Higashinaka and Isozaki 2008; Verberne et al. 2011; Oh et al. 2013; Oh et al. 2016; Sharp et al. 2016; Tan et al. 2016; dos Santos et al. 2016; Oh et al. 2017), most of them retrieve long text passages from a text archive; such passages contain real answers but are unsuitable for dialog systems because of their length. Table 1 exemplifies a why-question and its retrieved answer passage.…”
Section: Introduction (mentioning, confidence: 99%)
“…Another proposed approach is to look for answers that are related to the question in a relevant way, according to the information need of the question, which may be determined through task-specific word embeddings (i.e., different embeddings for different types of questions) [27]. A study using a factual memory network, which learns to answer questions by extracting and reasoning over relevant facts from a knowledge base, is also presented.…”
Section: Proposed Solutions in Knowledge Base Question Answering Systems (mentioning, confidence: 99%)
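As a sketch of how such task-specific embeddings might be used at answer-ranking time, the hypothetical code below routes a question to either a causal or a general-purpose lookup table and scores a candidate answer by the cosine similarity of averaged word vectors. The routing rule, table interfaces, and function names are assumptions, not the cited systems' actual APIs.

```python
# Hypothetical sketch: rank candidate answers with task-specific
# embeddings chosen by question type. Assumes tokenized, lowercased
# questions/answers and dict-like word -> np.ndarray lookup tables.
import numpy as np

def avg_vector(words, table, dim=50):
    vecs = [table[w] for w in words if w in table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def score(question, answer, causal_table, general_table):
    # Route "why" questions to causal embeddings, everything else
    # to general-purpose ones (an assumed, simplistic routing rule).
    table = causal_table if question[0] == "why" else general_table
    q, a = avg_vector(question, table), avg_vector(answer, table)
    denom = np.linalg.norm(q) * np.linalg.norm(a)
    return float(q @ a / denom) if denom else 0.0
```

Candidates would then be sorted by this score, possibly interpolated with a general-relatedness score, which is one natural way to combine task-specific and general-purpose signals.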