2005
DOI: 10.1207/s15434311laq0203_1

Individual Feedback to Enhance Rater Training: Does It Work?

Cited by 66 publications (56 citation statements)
References 11 publications
Citation types: 2 supporting, 54 mentioning, 0 contrasting
“…Fourth, rating context, which includes the medium and process through which responses are distributed to raters, rater training procedures, rater monitoring and feedback practices, and temporal and physical features of the rating environment, also influences the quality of ratings. Several studies have supported the notion that rater training increases the quality of ratings (Shohamy et al., 1992; Sweedler-Brown, 1985) and that the self-pacing afforded by an online training module increases the efficiency of the training process (Wolfe, Matthews, & Vickers, 2010; Elder, Barkhuizen, Knoch, & von Randow, 2007; Elder, Knoch, Barkhuizen, & von Randow, 2005; Knoch, Read, & von Randow, 2007). Less is known about the contribution to rating quality of response distribution systems, rater monitoring and feedback processes, and the setting within which ratings are assigned.…”
Section: Influences on Rater Agreement and Accuracy
confidence: 99%
“…Raters' reactions to the program were also mixed: some were positive, while others offered suggestions for improving the program or expressed a preference for face-to-face training. Elder et al. (2005) examined the effects of individual rater performance feedback on the scoring of the same writing test employed in Elder et al. (2007). The eight raters began by scoring 100 Diagnostic English Language Needs Assessment (DELNA) scripts, received online training, and then scored 50 randomly sampled DELNA writing scripts.…”
Section: Effect of Rater Training on Scoring Performance
confidence: 99%
“…Of the five studies reviewed above, three explored the effects of a conventional training method or procedure; the exceptions, Elder et al. (2005) and Knoch et al. (2007), looked into the use of customized rater feedback. Weigle (1998) used a pre-post-training design with a single rater group, and Shohamy et al. (1992) employed a study design that involved an experimental group and a control group.…”
Section: Effect of Rater Training on Scoring Performance
confidence: 99%
“…Others have written about or investigated what factors do (and do not) contribute to reliable marking: for example, the use of exemplar work (Sadler 1987; Wolf 1995; Baird, Greatorex, and Bell 2004), discussion between assessors (Black et al. 1989; Wolf 1995), discussion and consensus between examiners (Baird, Greatorex, and Bell 2004), feedback on marking performance (Wigglesworth 1993; Shaw 2002; Elder et al. 2005), training (Lunz, Wright, and Linacre 1990; Shohamy, Gordon, and Kraemer 1992), experience (Ruth and Murphy 1988; Weigle 1999; Shohamy, Gordon, and Kraemer 1992), contrast with previous candidates' performances (Spear 1997), which item or question an examiner is marking (Wiseman and Wrigley 1958; Black 1962), and examiners' personality characteristics (Branthwaite, Trueman, and Berrisford 1981; Greatorex and Bell 2004). This list is not intended to be exhaustive.…”
Section: Introduction
confidence: 99%
“…This list is not intended to be exhaustive. Some of this literature relates to GCE or GCSE marking (for example, Greatorex and Bell 2004), but some is about marking in other contexts, for example, university writing assessment programmes (Elder et al. 2005). Although there is a great deal of research about the reliability of marking, there are still some remaining research questions related to the GCSE and GCE context.…”
Section: Introduction
confidence: 99%