2020
DOI: 10.48550/arxiv.2004.11026
Preprint
QURIOUS: Question Generation Pretraining for Text Generation

Cited by 4 publications (4 citation statements)
References 0 publications
“…QG has progressed rapidly due to new datasets and model improvements. Many different QG models have been proposed, starting from simple vanilla sequence-to-sequence neural network models (seq2seq) (Du et al, 2017; Zhou et al, 2017; Yuan et al, 2017) to the more recent transformer-based models (Dong et al, 2019; Chan and Fan, 2019; Varanasi et al, 2020; Narayan et al, 2020; Bao et al, 2020). Some QG systems use manual linguistic features in their models (Harrison and Walker, 2018; Khullar et al, 2018; Liu et al, 2019a; Dhole and Manning, 2020), some consider how to select question-worthy content (Du and Cardie, 2017; Li et al, 2019; Scialom et al, 2019), and some systems explicitly model question types (Duan et al, 2017; Sun et al, 2018; Kang et al, 2019; Zhou et al, 2019).…”
Section: Related Work
confidence: 99%
“…2 Related Work 2.1 Question generation in NLP Question Generation (QG) is an active research topic in NLP. It is explored as a standalone task (Heilman and Smith, 2009; Nema et al, 2019), as a pre-training task for language models (Narayan et al, 2020), and as a component in solutions for other textual tasks, such as question answering (Puri et al, 2020), information retrieval (Mass et al, 2020; Gaur et al, 2021), and generation evaluation (Durmus et al, 2020; Honovich et al, 2021). There are two main directions to QG: template-based (Heilman and Smith, 2009; Lyu et al, 2021; Dhole and Manning, 2020) and neural-based, with the latter achieving state-of-the-art results (Narayan et al, 2020).…”
Section: VQ2A
confidence: 99%