Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications 2015
DOI: 10.3115/v1/w15-0612
Identifying Patterns For Short Answer Scoring Using Graph-based Lexico-Semantic Text Matching

Abstract: Short answer scoring systems typically use regular expressions, templates, or logic expressions to detect the presence of specific terms or concepts in student responses. Previous work has shown that manually developed regular expressions can provide effective scoring; however, manual development can be quite time consuming. In this work we present a new approach that uses word-order graphs to identify important patterns from human-provided rubric texts and top-scoring student answers. The approach also uses s…


Cited by 59 publications (43 citation statements)
References 17 publications
“…Our research work is closely related to the work of [12,13]. In Tandalla's approach, multi-features, including RE from text, are extracted and trained on RF and GBM.…”
Section: Related Work and Literature Review
confidence: 99%
“…The recent approaches of features extractions adopted by the research for essay scoring are regular expressions and semantic evaluation e.g., Tandalla and Rodrigues use the regular expressions as features, as described in Section 3.3.2, and Ramachandran's approach [13] is extracting text patterns containing content tokens and text patterns containing sentence-structure information with the attributes of semantics.…”
Section: Literature Review
confidence: 99%
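The Tandalla-style use of regular expressions as machine-learning features mentioned in the citation statements above can be sketched as a binary indicator vector per response. The patterns below are hypothetical placeholders:

```python
import re

# Hypothetical per-prompt patterns; in RE-feature systems these are hand-crafted.
PATTERNS = [
    re.compile(r"\bcell wall\b", re.IGNORECASE),
    re.compile(r"\bosmosis\b", re.IGNORECASE),
    re.compile(r"\bwater (moves|flows)\b", re.IGNORECASE),
]

def regex_features(response: str) -> list:
    """Binary indicator vector: one feature per rubric pattern."""
    return [1 if p.search(response) else 0 for p in PATTERNS]

# In the cited work, vectors like these (alongside other text features) are
# fed to ensemble learners such as random forests (RF) or gradient boosting
# machines (GBM) trained against human-assigned scores.
print(regex_features("Water moves out of the cell by osmosis."))  # [0, 1, 1]
```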
“…We choose learners that (a) have either been shown to perform well with feature sets comparable to ours in previously published work - Mohler et al. (2011), Sakaguchi et al. (2015), and Zesch et al. (2015) all used support vector machines; Ramachandran et al. (2015) use a random forest regressor - or (b) are generally known to perform well with a large number of sparse features (Hastie et al., 2001; Fan et al., 2008; Chang and Lin, 2011). We use the scikit-learn (Pedregosa et al., 2011) implementations for all learners.…”
Section: Learners
confidence: 99%
“…(b) response-based which use a large number of detailed features extracted from the student responses themselves (e.g., word ngrams, etc.) and human scores assigned to the responses to learn a supervised machine-learning model (Mohler et al., 2011; Dzikovska et al., 2013; Ramachandran et al., 2015; Zesch et al., 2015; Zhu et al., 2016). Response-based approaches generally require training a separate model for each question.…”
Section: Related Work
confidence: 99%
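The word n-gram feature extraction that response-based approaches rely on can be sketched in a few lines. The example response is hypothetical; real systems would map these counts into a sparse feature matrix before training a per-question model:

```python
from collections import Counter

def word_ngrams(text: str, n: int = 2) -> Counter:
    """Count word n-grams in a lowercased, whitespace-tokenized response."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Each scored response becomes an n-gram count vector; a supervised model is
# then trained per question on (feature vector, human score) pairs.
counts = word_ngrams("the mitochondria is the powerhouse", n=2)
print(counts)
```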
“…In contrast, scoring for content deals with responses to open-ended questions designed to test what the student knows, has learned, or can do in a specific subject area (such as Computer Science, Math, or Biology) (Sukkarieh and Stoyanchev, 2009; Sukkarieh, 2011; Mohler et al., 2011; Dzikovska et al., 2013; Ramachandran et al., 2015; Sakaguchi et al., 2015; Zhu et al., 2016). In order to measure the content of the spoken responses in our data, we extract the following set of features from the 1-best ASR hypotheses for each response:…”
Section: Text-driven Features
confidence: 99%