2019
DOI: 10.1609/aaai.v33i01.3301142
Exploiting Background Knowledge in Compact Answer Generation for Why-Questions

Abstract: This paper proposes a novel method for generating compact answers to open-domain why-questions, such as the following answer, “Because deep learning technologies were introduced,” to the question, “Why did Google’s machine translation service improve so drastically?” Although many works have dealt with why-question answering, most have focused on retrieving as answers relatively long text passages that consist of several sentences. Because of their length, such passages are not appropriate to be read aloud by …


Cited by 11 publications (10 citation statements). References 18 publications.
“…Our approach builds on top of this line of work by designing and testing generative models for AS2-based QA systems. In recent years, the use of generative approaches has been evaluated for other QA tasks, such as machine reading (MR) (Izacard and Grave, 2021; Lewis et al., 2020b) and question-based summarization (QS) (Iida et al., 2019; Goodwin et al., 2020; Deng et al., 2020). However, while related, these efforts are fundamentally different from the experimental setting described in this paper.…”
Section: Introduction
confidence: 99%
“…Fan et al. (2019) propose a multi-task Seq2Seq model with the concatenation of the question and support documents to generate long-form answers. Iida et al. (2019) and Nakatsuji and Okui (2020) incorporate background knowledge into a Seq2Seq model for why-questions and conclusion-centric questions. Some latest works (Feldman and El-Yaniv, 2019; Yadav et al., 2019; Nishida et al., 2019a) attempt to provide evidence or justifications for human-understandable explanation of the multi-hop inference process in factoid QA, where the inferred evidence is treated only as intermediate steps for finding the answer.…”
Section: Related Work
confidence: 99%
“…On top of BASE, it additionally used real-representation generator R to encode compact answers, which were generated by the compact-answer generator of Iida et al. (2019). R was trained alongside the why-QA model using WhySet, and the compact-answer generator was pre-trained with CmpAns.…”
Section: Base+cans
confidence: 99%
“…On top of BASE, it additionally used the encoder in the compact-answer generator of Iida et al. (2019) to create compact-answer representations.…”
Section: Base+cenc
confidence: 99%