Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.452

Answer-driven Deep Question Generation based on Reinforcement Learning

Abstract: Deep question generation (DQG) aims to generate complex questions through reasoning over multiple documents. The task is challenging and underexplored. Existing methods mainly focus on enhancing document representations, with little attention paid to the answer information, which may result in the generated question not matching the answer type and being answer-irrelevant. In this paper, we propose an Answer-driven Deep Question Generation (ADDQG) model based on the encoder-decoder framework. The model makes be…

Cited by 18 publications (7 citation statements) · References 32 publications
“…Going forward, we will explore incorporating ArtQuest's QG capabilities to enhance our Arthena chatbot and conduct user studies to evaluate it in operation. Given our low-resource context, we also would like to study some recent approaches such as data augmentation (Alberti et al 2019), few-shot learning (Lewis, Denoyer, and Riedel 2019; Chen et al 2020) and reinforcement learning using feedback from user studies (Wang et al 2020b).…”

Section: Discussion
confidence: 99%
“…We refer our readers to recent survey articles for an overview of the challenges, existing approaches, and evaluation metrics for the question generation and reading comprehension tasks (Pan et al 2019; Zeng et al 2020). For QG specifically, building on early research with attention and the basic encoder-decoder setup (Zhou et al 2018), recent works have started exploring transformers (Chan and Fan 2019), variational encoders (Lee et al 2020), reinforcement learning (Wang et al 2020b), semantic information (Pan et al 2020) and future n-gram prediction (Qi et al 2020). However, research addressing content selection and answer-unaware QG is still preliminary, with some previous works employing supervision for training an answer span selection module alongside QG (Du and Cardie; Subramanian et al 2018) or simply treating noun phrases and named entities as potential answer cues for QG (Lewis, Denoyer, and Riedel 2019; Kumar et al 2019).…”

Section: Related Work
confidence: 99%
“…Human evaluation is still the most reliable way to compare generative models for diverse tasks like question generation. Common categories to consider for question generation are grammar, difficulty, answerability and fluency (Nema et al, 2019; Tuan et al, 2019; Wang et al, 2020b; Huang et al, 2021). However, not all of these categories are relevant to clinical question generation.…”

Section: Human Evaluation
confidence: 99%