2008
DOI: 10.1002/j.2333-8504.2008.tb02107.x
EFFECT OF IMMEDIATE FEEDBACK AND REVISION ON PSYCHOMETRIC PROPERTIES OF OPEN‐ENDED GRE® SUBJECT TEST ITEMS

Abstract: Registered examinees for the GRE® Subject Tests in Biology and Psychology participated in a Web‐based experiment in which they answered open‐ended questions requiring a short answer of 1–3 sentences. Responses were automatically scored by natural language processing methods (the c‐rater™ scoring engine) immediately after submission, and participants received immediate feedback on the correctness of their answers…

Cited by 16 publications (11 citation statements)
References 21 publications
“…AWE programs are promoted as effective enhancers of process writing instruction, compelling guides for student revision, and robust vehicles of consistent writing and evaluation across the curriculum. They are also presumed to motivate multiple drafting and revision, foster learner autonomy, and enhance the instructional dynamic by supporting the drive toward individualized instruction (Attali & Powers, 2008;Burstein, 2012).…”
Section: Objections to AWE
confidence: 99%
“…Automated scoring has been widely applied in educational research to improve scoring efficiency and shorten the time between test administration and when teachers, test takers, and score users receive test results. Research on automated scoring has covered domains such as writing quality (Burstein & Marcu; Foltz, Laham, & Landauer), mathematics (Bennett & Sebrechts; Sandene, Horkay, Bennett, Braswell, & Oranje), written content (Attali & Powers; Dzikovska, Nielsen, & Brew; Graesser; Leacock & Chodorow; Mitchell, Russell, Broomhead, & Aldridge; Nielsen, Ward, & Martin; Sukkarieh & Bolge), speech (Bernstein, Van Moere, & Cheng; Higgins, Zechner, Xi, & Williamson), and other education‐related topics (Sargeant, Wood, & Anderson).…”
Section: Research on Content‐based Automated Scoring in Educational C…
confidence: 99%
“…Over the last two decades, automated scoring has been widely developed and used in a variety of content domains, such as mathematics (Bennett & Sebrechts; Sandene, Horkay, Bennett, Braswell, & Oranje), science (Linn et al.; Liu et al.; Nehm, Ha, & Mayfield), and language testing (Bernstein, Van Moere, & Cheng; Higgins, Zechner, Xi, & Williamson), to name a few. Furthermore, in assessing written responses across content domains, automated scoring has been used to evaluate rubric dimensions such as content (Attali & Powers; Dzikovska et al.; Leacock & Chodorow; Mitchell, Russell, Broomhead, & Aldridge; Nielsen, Ward, & Martin; Sukkarieh & Bolge) and quality (Burstein & Marcu; Foltz, Laham, & Landauer). The accuracy of automated scores depends on a number of factors, including the content domain, the complexity of the tasks, the levels of the scoring rubrics, and the number of responses available to build the automated scoring models.…”
Section: Introduction
confidence: 99%