2014
DOI: 10.1007/978-3-319-07221-0_76

Automatic Scoring of an Analytical Response-To-Text Assessment

Abstract: In analytical writing in response to text, students read a complex text and adopt an analytic stance in their writing about it. To evaluate this type of writing at scale, an automated approach for Response to Text Assessment (RTA) is needed. With the long-term goal of producing informative feedback for students and teachers, we design a new set of interpretable features that operationalize the Evidence rubric of RTA. When evaluated on a corpus of essays written by students in grades 4-6, our results …

Cited by 14 publications (31 citation statements)
References 11 publications
“…Table 4: A sub-list of manually extracted a) topic words and b) specific expressions for three sample topics. They are manually provided by experts in (Rahimi et al., 2014). Some of the stop-words might have been removed from the expressions by experts.…”
Section: Experimental Tools and Methods
confidence: 99%
“…Some of the stop-words might have been removed from the expressions by experts. We compare results for models that extracted features from topical components with a baseline model which uses the top 500 unigrams as features (chosen based on a chi-squared feature selection method), and with an upper-bound model which is the best model reported in (Rahimi et al., 2014). The only difference between our model and the upper-bound model is that in our model the topical components were extracted automatically instead of manually.…”
Section: Experimental Tools and Methods
confidence: 99%
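For context, the unigram baseline described in the excerpt above (top 500 unigrams chosen by a chi-squared test and fed to a standard classifier) can be sketched roughly as follows. The example essays, scores, and the choice of LogisticRegression are illustrative assumptions, not details taken from the cited papers.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: student essays with integer Evidence scores.
essays = [
    "The author gives evidence that the village changed after the project.",
    "Hannah saw the fields, the school, and the new well in the village.",
    "The village was poor before and the author says it is better now.",
    "I liked the story because it was interesting and fun to read.",
]
scores = [3, 4, 3, 1]

# Unigram counts over the essay corpus.
vectorizer = CountVectorizer(ngram_range=(1, 1))
X = vectorizer.fit_transform(essays)

# Keep the top 500 unigrams by a chi-squared test against the scores
# (capped by the vocabulary size so this toy example still runs).
selector = SelectKBest(chi2, k=min(500, X.shape[1]))
X_selected = selector.fit_transform(X, scores)

# Any standard classifier can sit on top of the selected features.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_selected, scores)

# Score a new essay using the same vectorizer and feature selector.
new_essay = vectorizer.transform(["The text shows the village got clean water."])
print(clf.predict(selector.transform(new_essay)))
```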