2019
DOI: 10.1080/0142159x.2019.1579902

Examiner training: A study of examiners making sense of norm-referenced feedback

Abstract: Examiner training has an inconsistent impact on subsequent performance. To understand this variation, we explored how examiners think about changing the way they assess. Method: We provided comparative data to seventeen experienced examiners about their assessments, captured their sense-making processes using a modified think-aloud protocol, and identified patterns by inductive thematic analysis. Results: We observed five sense-making processes: (1) testing personal relevance, (2) interpretation, (3) attribution (…

Cited by 5 publications (2 citation statements) · References 18 publications
“…In terms of alleviating the problem of excessive variation in cut-score stringency, the literature suggests that feedback to examiners on their judgments can sometimes help to reduce (score) stringency (Wong et al 2020), whilst recognising that this is a complex area and is not always effective (Crossley et al 2019; Gingerich et al 2011). The linear mixed modelling automatically produces a measure of cut-score stringency for each examiner, and this could form part of feedback to them of their performance relative to their peers. This information would have to be carefully mediated as it might be difficult for examiners to interpret or act on it compared to the more conventional feedback on scores.…”
Section: Discussion (mentioning)
confidence: 99%
“…[38] [39] [31] In the broader assessment arena, there is a general skepticism that further training of judges makes much impact on reducing these systematic sources of error in their interviewing performance, [24,40] although some approaches for giving specific feedback back to examiners are thought to have promise. [41,42]…”
Section: Interviewer Judgement in the Multiple Mini-interview (mentioning)
confidence: 99%