2017
DOI: 10.1186/s41039-017-0065-5

Controlling item difficulty for automatic vocabulary question generation

Abstract: The present study investigates the best factor for controlling the item difficulty of multiple-choice English vocabulary questions generated by an automatic question generation system. Three factors are considered for controlling item difficulty: (1) reading passage difficulty, (2) semantic similarity between the correct answer and distractors, and (3) the distractor word difficulty level. An experiment was conducted by administering machine-generated items to three groups of English learners. The groups were …


Cited by 10 publications (21 citation statements)
References 20 publications
“…Despite the growth in AQG, only 14 studies have dealt with difficulty. Eight of these studies focus on the difficulty of questions belonging to a particular domain, such as mathematical word problems (Wang and Su 2016; Khodeir et al 2018), geometry questions (Singhal et al 2016), vocabulary questions (Susanti et al 2017a), reading comprehension questions (Gao et al 2018), DFA problems (Shenoy et al 2016), code-tracing questions (Thomas et al 2019), and medical case-based questions (Kurdi et al 2019). The remaining six focus on controlling the difficulty of non-domain-specific questions (Lin et al 2015; Alsubait et al 2016; Kurdi et al 2017; Faizan and Lohmann 2018; Faizan et al 2017; Seyler et al 2017; Vinu and Kumar 2015a, 2017a; Vinu et al 2016; Vinu and Kumar 2017b, 2015b).…”
Section: Difficultymentioning
confidence: 99%
“…Difficulty control was validated by checking agreement between predicted difficulty and expert prediction in Vinu and Kumar (2015b), Alsubait et al (2016), Seyler et al (2017), Khodeir et al (2018), and Leo et al (2019), by checking agreement between predicted difficulty and student performance in Alsubait et al (2016), Susanti et al (2017a), Lin et al (2015), Wang and Su (2016), Leo et al (2019), and Thomas et al (2019), by employing automatic solvers in Gao et al (2018), or by asking experts to complete a survey after using the tool (Singhal et al 2016). Expert reviews and mock exams are equally represented (seven studies each).…”
Section: Difficultymentioning
confidence: 99%
“…Cosine similarity is a term-based similarity measure between two vectors of an inner product space, computed as the cosine of the angle between them (Gomaa and Fahmy 2013). It has been widely used in text semantic analysis tasks (Landauer and Dumais 1997; Mihalcea et al 2006; Cheng et al 2008; Susanti et al 2017). The Jaccard similarity coefficient (Roussinov and Zhao 2003) is a statistical measure of the extent of overlap between two vectors.…”
Section: Methodsmentioning
confidence: 99%
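The two measures named in the citation statement above can be sketched concretely. The following is a minimal illustration (not code from the cited paper): cosine similarity over term-frequency vectors, and the Jaccard coefficient over term sets; the toy documents and helper names are assumptions for the example only.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two term-frequency vectors:
    # dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def jaccard_similarity(a, b):
    # Overlap between two term sets: |A ∩ B| / |A ∪ B|
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Toy example: two short texts as bags of words over a shared vocabulary.
doc1 = "the cat sat on the mat".split()
doc2 = "the cat lay on the rug".split()
vocab = sorted(set(doc1) | set(doc2))
v1 = [doc1.count(w) for w in vocab]
v2 = [doc2.count(w) for w in vocab]

print(cosine_similarity(v1, v2))   # → 0.75
print(jaccard_similarity(doc1, doc2))  # → 3/7 ≈ 0.4286
```

Note the difference the statement alludes to: cosine similarity weights repeated terms (it is term-based), while Jaccard only counts whether a term occurs at all.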