In the United Kingdom, the majority of national assessments involve human raters. The processes by which raters determine the scores to award are central to the assessment process and affect the extent to which valid inferences can be made from assessment outcomes. Thus, understanding rater cognition has become a growing area of research in the United Kingdom. This study investigated rater cognition in the context of the assessment of school‐based project work for high‐stakes purposes. Thirteen teachers across three subjects were asked to “think aloud” whilst scoring example projects. Teachers also completed an internal standardization exercise. Nine professional raters across the same three subjects standardized a set of project scores whilst thinking aloud. The behaviors and features attended to were coded. The data provided insights into aspects of rater cognition such as reading strategies, emotional and social influences, evaluations of features of student work (which aligned with scoring criteria), and how overall judgments are reached. The findings can be related to existing theories of judgment. Based on the evidence collected, the cognition of teacher raters did not appear to be substantially different from that of professional raters.
Despite the abundant literature on educational measurement, there has been relatively little work investigating the psychological processes underpinning marking. This research investigated the processes involved when examiners mark examination responses. Scripts from two geography A-level examinations were used: one requiring short and medium-length responses and one requiring essays. Six examiners marked 50 scripts from each of the two examinations and were later asked to think aloud whilst marking four to six scripts from each examination. Coding and analyses identified different types of reading behaviours and social, emotional and personal reactions, and provided insight into the nature of evaluations. Some differences between examiners and between question types were identified. Analysis of associations between marker behaviours and marker agreement suggested that positive evaluations, comparisons and thorough reading were important for avoiding severity. Potential implications for marker training and for the impact of technological changes to assessment systems are discussed.
The process by which an assessor evaluates a piece of student work against a set of marking criteria is somewhat hidden and potentially complex. This judgement process is under-researched, particularly in contexts where teachers (rather than trained examiners) conduct the assessment and in contexts involving extended pieces of work. This paper reports research which explored the judgement processes involved when teachers mark General Certificate of Secondary Education (GCSE) coursework. Thirteen teachers across three subjects were interviewed about aspects of their marking judgements. In addition, 378 teachers across a wider range of subjects completed an associated questionnaire. The data provide insights into the way that criteria are used, the role that comparison plays in the process, and the importance of various professional experiences to making assessment judgements. Findings are likely to generalise to 'controlled assessments', which have replaced coursework in the GCSE.