Within large-scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies of scripts rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the judgements made about it. It employs qualitative and quantitative data-gathering approaches to focus on the annotation practices of readers who are assessing extended essays, and to explore whether these practices might be affected by mode. The project also gathers evidence about the spatial encoding ability of these readers, suggesting that mode-related influences on annotation might affect readers' ability to fully comprehend the material being read. Examiners had less spatial awareness of the location of script features on screen than on paper; however, the facility to annotate on screen might link to both the provision of appropriate annotations and the relative ease with which these are deployed whilst reading. There was also evidence that on-screen annotation could contribute to the development of good mental representations of texts read on screen.
Introduction

There is a growing body of research literature that considers how the mode of assessment, either computer- or paper-based, might affect students' performances (Paek 2005). Despite this, there is a fairly narrow literature that shifts the focus of attention to those making assessment judgements, and which considers issues of assessor behaviour when dealing with extended written essay answers in different modes. This might be considered surprising, since research literature from domains such as ergonomics, human factors, human-computer interaction and the psychology of reading suggests that the mode in which longer texts are read might be expected to influence the way that readers access and comprehend such texts. This issue has particular relevance for the domain of educational assessment, where there are moves towards increased digital script marking, as annotation has a clear formal quality assurance function within large-scale assessment systems. What is less clearly articulated is the informal utility of annotations for assessors whilst building comprehension, a function that carries implications beyond assessment and into the realms of human cognition, cognitive load and the psychology of reading.
Background

Bennett (2002) describes the rapid growth of computer technology use in workplaces and education as inexorable. Although technology offers the potential to broaden educational assessment beyond what traditional methods allow, there are inevitable concerns during a transition phase (where assessments exist in both paper- and computer-based modes) that their outcomes are not comparable. In her review of comparability studies since 1993, Paek (2005 ...