Proceedings of the 8th International Conference on Computer Supported Education 2016
DOI: 10.5220/0005775502670274
Item Difficulty Analysis of English Vocabulary Questions

Cited by 12 publications (16 citation statements). References: 0 publications.
“…On the other hand, grammatical inconsistency between the stem and the correct option can confuse test takers who have the required knowledge and would have been likely to select the key otherwise. Providing different phrasing for the question text is also of importance, playing a role in keeping test takers engaged.…”

Table 6: Features proposed for controlling the difficulty of generated questions
Lin et al. (2015): feature-based similarity between key and distractors
Singhal et al. (2015a, b, 2016): number and type of domain objects involved; number and type of domain rules involved; user-given scenarios; length of the solution; direct/indirect use of rules involved
Susanti et al. (2015, 2016, 2017a, b): reading passage difficulty; contextual similarity between key and distractors; distractor word difficulty level
Kumar (2015a, 2017a): quality of hints (i.e., how much they reduce the answer space)

Section: Verbalisation (mentioning)
confidence: 99%
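The "contextual similarity between key and distractors" feature attributed to Susanti et al. above can be sketched with a toy cosine-similarity computation. The word vectors below are invented placeholders purely for illustration; a real system would use embeddings from a trained word-vector model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings (NOT from any real model).
vectors = {
    "happy": [0.90, 0.10, 0.20],
    "glad":  [0.85, 0.15, 0.25],  # near-synonym of the key
    "table": [0.05, 0.20, 0.95],  # semantically unrelated
}

def distractor_similarity(key, distractors):
    """Mean key-distractor similarity; a higher value suggests a harder item,
    since the distractors are semantically closer to the correct answer."""
    sims = [cosine(vectors[key], vectors[d]) for d in distractors]
    return sum(sims) / len(sims)

# An item whose distractor is a near-synonym scores as harder than one
# whose distractor is unrelated.
hard = distractor_similarity("happy", ["glad"])
easy = distractor_similarity("happy", ["table"])
assert hard > easy
```

The design choice here mirrors the intuition in the quoted table: distractors close to the key in meaning force finer semantic discrimination, raising item difficulty.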
“…Legend: #Q = number of evaluated questions; #P = number of participants (S = student(s), E = expert(s), C = co-worker(s), A = author(s)); avg. = average; NR = not reported in the paper; NC = not clear; NA = not applicable; * = not reported but calculated based on provided data; (+) = refer to the paper for extra information about the results or the context of the study.

From International Journal of Artificial Intelligence in Education (2020) 30:121-204:
Khodeir et al. (2018): math; 25 Q; 4 E; 82% of questions were thought to be human-authored; 3 categories (system-generated; human-generated; unsure)
Susanti et al. (2015, 2016, 2017a): language; 22 Q; 7 E; 45% of questions were thought to be human-authored; binary choice (human-generated; machine-generated). Also: language; 69 Q; 364 C; 67% of questions were thought to be human-authored
Overlap with human-generated questions:
Shirude et al. (2015): generic; #Q NR; 12 E; 63.15% of the question types generated by humans are covered by the generator; NR
Vinu and Kumar (2015a): generic; NR; NR; recall = 43% to 81%, precision = 72% to 93% (+)
Liu et al. (2017): language (RC); 600 Q; 3 E; recall = 64%, precision = 69% (+); NR
Jouault et al. (2015a, b, 2016a, b, 2017): history; 69 Q; 1 E; 84% of the human-authored questions are covered by auto-generated questions (coverage means that both questions cover the same knowledge)…”

Section: Limitations (mentioning)
confidence: 99%
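The recall and precision figures quoted above measure overlap between generated and human-authored questions; under the stated definition of coverage (both questions address the same knowledge), they reduce to a set intersection. A minimal sketch, using hypothetical knowledge-unit identifiers:

```python
def coverage_metrics(generated, human):
    """Precision/recall of question coverage, where each question is
    represented by the knowledge unit it targets (an assumption made
    here for illustration).
    precision = fraction of generated questions matching a human one;
    recall    = fraction of human questions covered by a generated one."""
    gen = set(generated)
    hum = set(human)
    covered = gen & hum
    precision = len(covered) / len(gen)
    recall = len(covered) / len(hum)
    return precision, recall

# Hypothetical knowledge-unit IDs: 4 generated, 3 human-authored,
# 2 in common -> precision = 2/4 = 0.5, recall = 2/3.
p, r = coverage_metrics(["k1", "k2", "k3", "k5"], ["k1", "k2", "k4"])
assert p == 0.5
assert abs(r - 2 / 3) < 1e-9
```

Representing each question by a single knowledge unit is a simplification; the cited studies relied on expert judgment to decide whether two questions "cover the same knowledge".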
“…More recently, Susanti et al (2016) conducted an investigation of several potential factors affecting item difficulty in the vocabulary questions used in TOEFL. All investigated factors are related to the components of the vocabulary questions, as presented in Fig.…”
Section: Method: Controlling Item Difficulty (mentioning)
confidence: 99%
“…Trace et al (2015) used item and passage characteristics to determine the item difficulty of cloze questions2 across the test taker’s nationality and proficiency level. Other studies focused on vocabulary questions, as conducted by Hoshino and Nakagawa (2010), Beinborn et al (2014), and Susanti et al (2016). Beinborn et al (2014) worked on predicting the gap difficulty of the C-test3 using a combination of factors such as phonetic difficulty and text complexity, while Hoshino and Nakagawa (2010) and Susanti et al (2016) investigated factors affecting item difficulty on multiple-choice vocabulary questions.…”
Section: Introductionmentioning
confidence: 99%
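The difficulty-prediction studies cited above combine several item features into a single estimate. A minimal sketch of such a combination as a weighted linear score over normalized features; the feature names and weights here are illustrative assumptions, not values taken from any of the cited papers:

```python
def item_difficulty_score(features, weights):
    """Weighted linear combination of item features, each normalized to
    [0, 1]. Higher score = predicted harder item. In the cited work the
    weights would be fitted to empirical response data, not hand-set."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature weights (sum to 1 for interpretability).
weights = {
    "passage_difficulty": 0.4,
    "key_distractor_similarity": 0.4,
    "distractor_word_level": 0.2,
}

easy_item = {
    "passage_difficulty": 0.2,
    "key_distractor_similarity": 0.1,
    "distractor_word_level": 0.3,
}
hard_item = {
    "passage_difficulty": 0.8,
    "key_distractor_similarity": 0.9,
    "distractor_word_level": 0.7,
}

# The item with a harder passage and more confusable distractors
# receives the higher difficulty score.
assert item_difficulty_score(hard_item, weights) > item_difficulty_score(easy_item, weights)
```

A linear score is only one option; the cited studies also examined regression and classification models over such features.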