Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)
DOI: 10.18653/v1/2022.bea-1.26

Educational Multi-Question Generation for Reading Comprehension

Abstract: Automated question generation has made great advances with the help of large NLP generation models. However, typically only one question is generated for each intended answer. We propose a new task, Multi-Question Generation, aimed at generating multiple semantically similar but lexically diverse questions assessing the same concept. We develop an evaluation framework based on desirable qualities of the resulting questions. Results comparing multiple question generation approaches in the two-question generation […]
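For intuition about the task, here is a minimal sketch (not the paper's method) of producing several lexically diverse questions for the same answer span with an off-the-shelf T5 question-generation checkpoint and diverse beam search. The checkpoint name and the "<hl>" answer-highlight convention are assumptions.

```python
# Minimal sketch of multi-question generation for one answer span.
# Not the paper's method: the checkpoint name and the "<hl>" highlight
# convention are assumptions, and diverse beam search stands in for
# whatever decoding strategy the compared systems actually use.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "valhalla/t5-base-qg-hl"  # hypothetical QG checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Highlight the intended answer so every question assesses the same concept.
text = ("generate question: The <hl> mitochondrion <hl> produces most of "
        "the cell's supply of ATP, used as a source of chemical energy.")
inputs = tokenizer(text, return_tensors="pt")

# Diverse beam search: beams are split into groups, and each group is
# penalized for repeating tokens chosen by other groups, which encourages
# lexical variety across the returned questions.
outputs = model.generate(
    **inputs,
    max_new_tokens=48,
    num_beams=4,
    num_beam_groups=4,
    diversity_penalty=1.0,
    num_return_sequences=4,
)
for question in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(question)
```

Whether the returned questions are "semantically similar but lexically diverse" in the paper's sense would still need to be checked against its evaluation framework; the decoding trick above only encourages surface variety.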

Cited by 12 publications (8 citation statements). References 10 publications.

Citation statements:
“…To meet the needs of various domain-specific scenarios, studies have proposed generating diverse questions that differ in expression while conveying the same meaning, given an answer and its relevant context. This research currently falls into two main classes: enhancing DQG by using internal or external knowledge [10,11,12] and improving the quality of DQG by refining the decoding process [9,13]. In the first class, Wang et al. [10] introduced a novel content selector that uses the continuous latent variable modeling technique of a Conditional Variational Encoder (CVE).…”
Section: Related Work. Citation type: mentioning (confidence: 99%).
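The statement above groups DQG work into knowledge-based enhancement and decoding refinement. As a generic, hedged illustration of the decoding-refinement class, the sketch below draws several distinct questions with nucleus (top-p) sampling; it reuses the hypothetical checkpoint from the earlier sketch and is not the specific method of [9] or [13].

```python
# Generic sketch of diversity obtained through decoding alone: sample
# several questions with nucleus (top-p) sampling instead of beam search.
# Illustrative only; not the decoding method of the cited works.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "valhalla/t5-base-qg-hl"  # same hypothetical checkpoint as above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "generate question: <hl> Paris <hl> is the capital of France."
inputs = tokenizer(text, return_tensors="pt")

torch.manual_seed(0)  # make the sketch reproducible
outputs = model.generate(
    **inputs,
    do_sample=True,    # sampling, not search
    top_p=0.9,         # nucleus: sample from the smallest set with 90% mass
    temperature=1.0,
    max_new_tokens=48,
    num_return_sequences=4,
)
for q in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(q)
```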
“…To evaluate the proposed model, we adopted the PL-DQG dataset created by Rathod et al. [9]. This dataset is derived from the publicly available SQuAD dataset [14], with the questions transformed into a second set by pre-trained language models.…”
Section: Experiments, 4.1 Dataset. Citation type: mentioning (confidence: 99%).
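For concreteness, a pipeline in this spirit might look like the sketch below: take (context, answer, question) triples from SQuAD and add a second question produced by a pretrained paraphrase model. The paraphrase checkpoint and its "paraphrase:" prompt are assumptions, not the actual PL-DQG construction of Rathod et al. [9].

```python
# Hypothetical sketch of building a two-question set from SQuAD: keep each
# (context, answer, question) triple and add a second question produced by
# a pretrained paraphrase model. Not the actual PL-DQG pipeline; the
# paraphrase checkpoint and prompt format are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

squad = load_dataset("squad", split="train[:5]")  # tiny slice for the demo

para_name = "Vamsi/T5_Paraphrase_Paws"  # hypothetical paraphrase checkpoint
tokenizer = AutoTokenizer.from_pretrained(para_name)
model = AutoModelForSeq2SeqLM.from_pretrained(para_name)

def second_question(question: str) -> str:
    # T5-style task prefix; the exact prompt depends on the checkpoint.
    inputs = tokenizer(f"paraphrase: {question}", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=48, num_beams=4)
    return tokenizer.decode(out[0], skip_special_tokens=True)

pairs = [
    {
        "context": ex["context"],
        "answer": ex["answers"]["text"][0],
        "question_1": ex["question"],
        "question_2": second_question(ex["question"]),
    }
    for ex in squad
]
print(pairs[0])
```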
“…Much greater effort has been devoted to generating diverse and hard questions. High-quality questions have been generated with pre-trained language models (Wang et al., 2018; Kumar et al., 2019; Pan et al., 2020; Cheng et al., 2021) and, in particular, for educational purposes (Stasaski et al., 2021; Rathod et al., 2022; Zou et al., 2022). In addition, for multiple-choice questions, distractor generation has also received due attention (Liu et al., 2005; Susanti et al., 2018; Gao et al., 2019; Qiu et al., 2021; Ren and Zhu, 2021; Zhang and VanLehn, 2021).…”
Section: Reading-Related Technologies in the Age of Deep Learning. Citation type: mentioning (confidence: 99%).