2004
DOI: 10.1002/j.2333-8504.2004.tb01941.x
Automated Tools for Subject Matter Expert Evaluation of Automated Scoring

Abstract: As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this paper explores the potential utility of classification and regression trees (CART) and Kohonen self-organizing maps (SOM) as tools to facilitate subject matter expert (SME) examination o…

Cited by 4 publications (1 citation statement) | References 27 publications
“…CART (Breiman, Friedman, Olshen, & Stone, 1984) has been used previously in the context of automated scoring by Zechner, Higgins, Xi, and Williamson (2009) for building and evaluating scoring models for the SpeechRaterSM automated scoring service, and by Williamson, Bejar, and Sax (2004) as an automated tool to help subject matter experts (SMEs) evaluate discrepancies between human and machine scores. As Williamson et al. (2004) noted, CART has been successfully used in prior research on classification problems in psychometrics (e.g., Sheehan, 1997, for proficiency scaling and diagnostic assessment; Bejar, Yepes‐Baraya, & Miller, 1997, for modeling rater cognition; Holland, Ponte, Crane, & Malberg, 1998, for computerized adaptive testing; all as cited in Williamson et al., 2004).…”
Section: Methods
confidence: 99%