2005
DOI: 10.1080/01421590400013461
Ensuring reliability in UK written tests of general practice: the MRCGP examination 1998–2003

Abstract: Reliability in written examinations is taken very seriously by examination boards and candidates alike. Within general education many factors influence reliability, including variations between markers, within markers, within candidates and within teachers. Mechanisms designed to overcome, or at least minimize, the impact of such variables are detailed. Methods of establishing reliability are also explored in the context of a range of assessment situations. In written tests of general practice within the Member…

Cited by 5 publications (6 citation statements) · References 2 publications
“…Our literature review found only one study that discussed and evaluated interrater results of essay-style qualification exams and found reliabilities ranging from 0.59 to 0.69 that utilized a 1–5-point rating with four faculty readers reviewing 26 student responses (Burck & Peterson, 1983). Articles by Forster and Masters (1996), Marsh and Ireland (1987), Munro et al (2005), and Wakeford and Roberts (1984) provide some discussion of processes to assess and improve interrater reliabilities of written essay-style exams.…”
Section: Discussion; citation type: mentioning; confidence: 99%
“…These examinations have introduced the elements of both continuous assessment and assessment of ENT practical skills to enhance our undergraduate ENT course. ‘Open form’ or ‘context-rich’ assessments are most suitable for evaluating the application of knowledge and higher level abilities 27 , 28 . Multiple levels of Miller's hierarchical framework of assessment 4 are now included in our course.…”
Section: Discussion; citation type: mentioning; confidence: 99%
“…Finally, performance and practical skills are evaluated using an objective structured clinical examination. While it is established that essay and short answer question papers are resource intensive in comparison to machine-markable assessments, 27 the advantage of a dedicated ENT tutor at our institution facilitates the use of these written assessment methods. To save on resources, other institutions may prefer to develop tests of knowledge based on multiple-choice questions.…”
Section: Discussion; citation type: mentioning; confidence: 99%
“…Peer review is an established method for giving feedback and non-threatening quality control in teaching and assessment in a variety of situations [ 14 - 17 ] but its use in review of marking is limited so far. Peer calibration prior to marking a written paper that does not involve meeting and discussing marking interpretation has been described [ 18 , 19 ], but we would advocate face to face structured peer review following the initial marking of a small sample of papers. We have explored this technique [ 13 ], and found markers thought it valuable regardless of their level of experience.…”
Section: Discussion; citation type: mentioning; confidence: 99%