2020
DOI: 10.48550/arxiv.2004.02143
Preprint
Reinforced Multi-task Approach for Multi-hop Question Generation

Abstract: Question generation (QG) attempts to solve the inverse of the question answering (QA) problem by generating a natural language question given a document and an answer. While sequence-to-sequence neural models surpass rule-based systems for QG, they are limited in their capacity to focus on more than one supporting fact. QG often requires multiple supporting facts to generate high-quality questions. Inspired by recent works on multi-hop reasoning in QA, we take up multi-hop question generation, which aims at …

Cited by 1 publication (2 citation statements)
References 16 publications
“…We focus on text-based open-domain QA system, which first uses information retrieval (IR) to select passages, then applies a machine reading comprehension (MRC) model to extract answers from the passage pool. Since Chen et al. (2017) built the first end-to-end system to tackle machine reading at scale, the IR phase has been investigated extensively (Wang et al. 2018a; Kratzwald and Feuerriegel 2018; Lee et al. 2018; Das et al. 2019; Lee, Chang, and Toutanova 2019; Guu et al. 2020). Meanwhile, MRC has also achieved great success in large-scale QA datasets, such as SQuAD (Rajpurkar et al. 2016), especially after the advent of BERT (Devlin et al. 2019).…”
Section: Related Work
confidence: 99%
“…Later on, some studies proposed to leverage direct or indirect information in paragraphs for QG (Du and Cardie 2018; Liu et al. 2019a; Song et al. 2018; Zhao et al. 2018). To improve on "shallow" questions confined to a single sentence, deep QG has been explored (Pan et al. 2020; Gupta et al. 2020). More recently, large-scale language model pretraining has been applied to QG (Dong et al. 2019; Bao et al. 2020; Xiao et al. 2020), achieving state-of-the-art performance with significant improvements.…”
Section: Related Work
confidence: 99%