2012
DOI: 10.1080/13803611.2012.659932
Extended essay marking on screen: is examiner marking accuracy influenced by marking mode?

Cited by 7 publications (9 citation statements) · References 17 publications
“…Furthermore, reading on screen was found to be a more cognitively demanding task than reading on paper (Wästlund, Reinikka, Norlander & Archer, ) and such increased cognitive workload might result in lower levels of comprehension of a text (Mayes, Sims & Koonce, ). This effect was most significant when readers read longer texts than when they read short texts (Johnson et al., ).…”
Section: Results (confidence: 99%)
See 1 more Smart Citation
“…Furthermore, reading on screen was found to be a more cognitively demanding task than reading on paper (Wästlund, Reinikka, Norlander & Archer, ) and such increased cognitive workload might result in lower levels of comprehension of a text (Mayes, Sims & Koonce, ). This effect was most significant when readers read longer texts than when they read short texts (Johnson et al , ).…”
Section: Resultsmentioning
confidence: 99%
“…Accompanying the wide use of OSM is the increasing research attention on relevant areas including the comparability between paper‐based marking and OSM (e.g., Geranpayeh, ; Johnson, Hopkin, Shiell & Bell, ; Johnson, Nádas & Bell, ), the development of OSM systems (e.g., Campbell, ; Ramakrishna, Navya Sree, Sri Harish, Swarna & Vasundhara, ), as well as markers' attitudes towards OSM (e.g., Coniam, ; Yan & Coniam, ). Nevertheless, the literature about marking issues in the new assessment environment remains limited.…”
Section: Introduction (confidence: 99%)
“…It is worth mentioning that several studies conducted in the United Kingdom and Hong Kong have lent substantial support to comparability (Coniam & Yan, 2016; Johnson et al., 2012; Johnson, Nádas, & Bell, 2010; Raikes, Greatorex, & Shaw, 2004). In these studies, the rater scores awarded to scanned scripts and paper originals were analyzed and contrasted by computing interrater indexes such as exact agreement indexes, Cohen's κ, Pearson's r, Kendall's τ-b, or intraclass correlation coefficients (ICCs).…”
Section: Comparability of OSS Mode and PBS Mode (confidence: 91%)
“…Other lines of research have investigated the effects of annotations such as commenting, circling, and underlining on raters' marking results (Johnson, Hopkin, Shiell, & Bell, 2012; Johnson & Shaw, 2008; Shaw, 2008). Crisp and Johnson (2007) recruited 12 experienced raters to mark scripts written for the General Certificate of Secondary Education (GCSE) Mathematics and Business Studies qualifications in the United Kingdom.…”
Section: Comparability of OSS Mode and PBS Mode (confidence: 99%)
“…Human raters evaluate language samples, refer to scale descriptors and apply judgement and experience to assign a final score, but there is often no quantifiable way of measuring how they weight and combine the various pieces of information in an essay. In reality, we rely on a lot of experiential judgement and knowledge on the part of examiners (Newstead and Dennis, 1994; Greatorex and Bell, 2008; Johnson et al., 2012). It is hard to explain how you know what an assessment is worth when judged against some criteria, but there is definitely an element of 'gut feeling' in marking.…”
Section: Automated Scoring (confidence: 99%)