2022
DOI: 10.1111/medu.14713
Determining influence, interaction and causality of contrast and sequence effects in objective structured clinical exams

Abstract: Introduction. Differential rater function over time (DRIFT) and contrast effects (examiners' scores biased away from the standard of preceding performances) both challenge the fairness of scoring in objective structured clinical exams (OSCEs). This is important as, under some circumstances, these effects could alter whether some candidates pass or fail assessments. Benefitting from experimental control, this study investigated the causality, operation and interaction of both effects simultaneously for the first…

Cited by 9 publications (2 citation statements)
References: 47 publications
“…All simulation is limited by the parameters of the simulation. In this study, we modelled all known substantial influences on OSCE scores (candidate, station, examiner, and appropriate random variance terms) (18, 19, 34), but omitted influences shown more recently to be minor, such as contrast effects or differential rater function over time (35). Importantly, we can't comment on combinations of parameters which we didn't test (for example 60% examiner participation, 3 linking videos or 12% baseline difference), nor can we infer beyond the range of modelled parameters (i.e.…”
Section: Limitations (mentioning)
confidence: 99%
“…Their findings suggest that despite following accepted procedures for OSCE conduct, significant differences may persist between groups of examiners, which could affect the pass/fail classification of a significant minority of students. Follow-up work has enhanced the technique’s feasibility [24] and shown that it is adequately robust to several potential confounding influences [25] and variations in implementation [26]. While these findings suggest that examiner-cohort effects are important and support the validity of VESCA for their measurement, VESCA has not yet been used across institutions, so both the likely magnitude of effects which may arise and the practical implications of applying the method across institutions are unknown.…”
Section: Introduction (mentioning)
confidence: 99%