Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications 2017
DOI: 10.18653/v1/w17-5015

Distractor Generation for Chinese Fill-in-the-blank Items

Abstract: This paper reports the first study on automatic generation of distractors for fill-in-the-blank items for learning Chinese vocabulary. We investigate the quality of distractors generated by a number of criteria, including part-of-speech, difficulty level, spelling, word co-occurrence, and semantic similarity. Evaluations show that a semantic similarity measure, based on the word2vec model, yields distractors that are significantly more plausible than those generated by baseline methods.
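The word2vec criterion described in the abstract amounts to ranking vocabulary words by cosine similarity to the blanked-out key and taking the nearest neighbours as distractor candidates. A minimal sketch of that ranking step follows; the tiny hand-made vectors and vocabulary are illustrative assumptions, not the paper's trained model, which would be learned from a large Chinese corpus.

```python
import math

# Toy stand-ins for word2vec vectors (assumption: real use would load
# embeddings trained on a Chinese corpus, e.g. with gensim).
EMBEDDINGS = {
    "高兴": [0.9, 0.1, 0.0],    # happy (the key)
    "快乐": [0.85, 0.15, 0.05],  # joyful — semantically close
    "悲伤": [-0.8, 0.2, 0.1],    # sad — opposite direction
    "苹果": [0.0, 0.9, 0.3],     # apple — unrelated
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def distractors(key, k=2):
    """Rank every other vocabulary word by similarity to the key
    and return the top k as distractor candidates."""
    ranked = sorted(
        (w for w in EMBEDDINGS if w != key),
        key=lambda w: cosine(EMBEDDINGS[key], EMBEDDINGS[w]),
        reverse=True,
    )
    return ranked[:k]

print(distractors("高兴", 2))  # nearest neighbours of the key
```

In a full system this candidate list would still be filtered by the other criteria the paper lists (matching part-of-speech, comparable difficulty level), since a nearest neighbour that is a synonym of the key would make the item unanswerable.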

Cited by 30 publications (29 citation statements)
References 15 publications
“…Jiang and Lee 2017), POS (Soonklang and Muangon 2017; Susanti et al. 2015; Satria and Tokunaga 2017a, b; Jiang and Lee 2017), or co-occurrence with the key (Jiang and Lee 2017). A dominant approach is the selection of distractors based on their similarity to the key, using different notions of similarity, such as syntax-based similarity (i.e.…”
“…A dominant approach is the selection of distractors based on their similarity to the key, using different notions of similarity, such as syntax-based similarity (i.e. similar POS, similar letters; Satria and Tokunaga 2017a, b; Jiang and Lee 2017), feature-based similarity (Wita et al. 2018; Majumder and Saha 2015; Patra and Saha 2018a, b; Alsubait et al. 2016; Leo et al. 2019), or contextual similarity (Afzal 2015; Kumar et al. 2015a, b; Yaneva et al. 2018; Shah et al. 2017; Jiang and Lee 2017). Some studies (Lopetegui et al. 2015; Faizan and Lohmann 2018; Faizan et al. 2017;…”
“…However, there are some papers in which DM is used for each isolated application, for instance learner motivation (13% of papers), learning styles (8%), provide feedback for instructors (9%), detecting language anxiety (6%), predicting performance (14%), L2 orientations (8%), language reading comprehension (5%), and detecting grammar issues and assessment (7%). In this analysis, the DM applications most frequently used in the context of FLL are: predicting performance (Linck et al., 2013; Seker, 2016; Swanson et al., 2016; Wang & Cheng, 2016; Whitehill & Movellan, 2018), learner motivation (Apple, Falout, & Hill, 2013; Li & Zhou, 2017; Saeed et al., 2014; Tajeddin & Moghadam, 2012), provide feedback for instructors (Coskun & Mutlu, 2017; Jiang & Lee, 2017; Kaoropthai, Natakuatoong, & Cooharojananone, 2016; Kieffer & Lesaux, 2012; Rodriguez & Shepard, 2013; Zhao et al., 2015), learning styles (Aslan et al., 2014; Farrington et al., 2015; Hamedi, Pishghadam, & Ghazanfari, 2016; Hsiao, Lan, Kao, & Li, 2017), detecting language anxiety (Baghaei & Ravand, 2015; Cakir & Solak, 2014; Guntzviller et al., 2016; Martin & Valdivia, 2017), and L2 orientations (Allen et al., 2014; Lou & Noels, 2017; Maqsood et al., 2016; Winke, 2013). Figures 8 and 9 show the correlation between the educational level at which the articles developed their proposal and the EDM methods and applications that have been used, respectively.…”
Section: EDM Methods References
“…Taking these criteria into consideration, most existing methods for DG are based on various similarity measures. These include WordNet-based metrics (Mitkov and Ha, 2003), embedding-based similarities (Guo et al., 2016; Kumar et al., 2015; Jiang and Lee, 2017), n-gram co-occurrence likelihood (Hill and Simha, 2016), phonetic and morphological similarities (Pino and Eskenazi, 2009), structural similarities in an ontology (Stasaski and Hearst, 2017), a thesaurus (Sumita et al., 2005), context similarity (Pino et al., 2008), context-sensitive inference (Zesch and Melamud, 2014), and syntactic similarity (Chen et al., 2006). Then distractors are selected from a candidate distractor set based on a weighted combination of similarities, where the weights are determined by heuristics.…”
Section: Introduction