2022
DOI: 10.48550/arxiv.2205.01730
Preprint
Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation

Abstract: Question generation (QGen) models are often evaluated with standardized NLG metrics that are based on n-gram overlap. In this paper, we measure whether these metric improvements translate to gains in a practical setting, focusing on the use case of helping teachers automate the generation of reading comprehension quizzes. In our study, teachers building a quiz receive question suggestions, which they can either accept or refuse with a reason. Even though we find that recent progress in QGen leads to a signific…

Cited by 2 publications (1 citation statement)
References 19 publications
“…However, this work only evaluates questions at the individual question-level rather than at a quiz-level. • The work by Laban et al. (2022) moves beyond question-level evaluations to a quiz writing task, similar to our work in this paper. The authors design a task where teachers make a quiz exclusively with candidate questions that were automatically generated.…”
Section: Introduction
confidence: 84%