Background
During 2020, the COVID-19 pandemic caused worldwide disruption to the delivery of clinical assessments, requiring medical schools to rapidly adapt the design of established tools. Derived from the traditional face-to-face Objective Structured Clinical Examination (OSCE), the virtual OSCE (vOSCE) was delivered online using a range of school-dependent designs. The quality of these new formats was evaluated remotely through virtual quality assurance (vQA). This study synthesizes the vOSCE and vQA experiences of stakeholders from participating Australian medical schools based on a Quality framework.
Methods
This study used a descriptive phenomenological qualitative design. Focus group discussions (FGDs) were held with 23 stakeholders, including examiners, academics, simulated patients, professional staff, students and quality assurance examiners. The data were analyzed using a theory-driven conceptual Quality framework.
Results
The vOSCE was perceived as a relatively fit-for-purpose assessment during pandemic physical-distancing mandates. It was also identified as value-for-money and was noted to provide procedural benefits that led to an enhanced experience for those involved. However, despite being largely delivered fault-free, the current designs are limited in the scope of skills they can assess and thus do not meet the established quality of the traditional OSCE.
Conclusions
Whilst virtual clinical assessments are limited in their scope for assessing clinical competency compared with the traditional OSCE, their integration into programs of assessment has significant potential. Scholarly review of stakeholder experiences has elucidated quality aspects that can inform iterative improvements to the design and implementation of future vOSCEs.
Background
Objective structured clinical examinations (OSCEs) are commonly used to assess the clinical skills of health professional students. Examiner judgement is one acknowledged source of variation in candidate marks. This paper reports an exploration of examiner decision making to better characterise the cognitive processes and workload associated with making judgements of clinical performance in exit-level OSCEs.
Methods
Fifty-five examiners for exit-level OSCEs at five Australian medical schools completed a NASA Task Load Index (TLX) measure of cognitive load and participated in focus group interviews immediately after the OSCE session. Discussions focused on how decisions were made for borderline and clear-pass candidates. Interviews were transcribed, coded and thematically analysed. NASA TLX results were analysed quantitatively.
Results
Examiners self-reported higher cognitive workload when assessing a borderline candidate than when assessing a clear-pass candidate. Further analysis revealed five major themes considered by examiners when marking candidate performance in an OSCE: (a) use of marking criteria as a source of reassurance; (b) difficulty adhering to the marking sheet under certain conditions; (c) demeanour of candidates; (d) patient safety; and (e) calibration using a mental construct of the 'mythical [prototypical] intern'. Examiners reported particularly high mental demand when assessing borderline compared with clear-pass candidates.
Conclusions
Examiners demonstrate that judging candidate performance is a complex, cognitively demanding task, particularly when performance is of borderline or lower standard. At programme exit level, examiners intuitively want to rate candidates against a construct of a prototypical graduate when marking criteria do not appear to describe both what a passing candidate should demonstrate when completing clinical tasks and how they should demonstrate it. This construct should be shared, agreed upon and aligned with marking criteria to best guide examiner training and calibration. Achieving this integration may improve the accuracy and consistency of examiner judgements and reduce cognitive workload.
The Objective Structured Clinical Examination (OSCE) has traditionally been viewed as a highly valued tool for assessing clinical competence in health professions education. However, because the OSCE typically consists of a large-scale, face-to-face assessment activity, it has been variably criticized in recent years for the extensive resourcing and relative expense required for delivery. Importantly, due to COVID-pandemic conditions and necessary health guidelines in 2020 and 2021, logistical issues inherent in OSCE delivery were exacerbated for many institutions across the globe. As a result, alternative clinical assessment strategies were employed to gather assessment datapoints to guide decision-making regarding student progression. Now, as communities learn to "live with COVID", health professions educators have the opportunity to consider what weight should be placed on the OSCE as a tool for clinical assessment in the peri-pandemic world. To elucidate this timely clinical assessment issue, this qualitative study used focus group discussions to explore the perceptions of 23 clinical assessment stakeholders (examiners, students, simulated patients and administrators) regarding the future role of the traditional OSCE. Thematic analysis of the focus group transcripts revealed four major themes in relation to participants' views on the future of the OSCE vis-à-vis other clinical assessments in this peri-pandemic climate. The identified themes are (a) enduring value of the OSCE; (b) OSCE tensions; (c) educational impact; and (d) the importance of programs of assessment. It is clear that the OSCE continues to play a role in clinical assessments due to its perceived fairness, standardization and ability to yield robust results. However, recent experiences have resulted in a diminishing and refining of its role alongside workplace-based assessments in new, peri-pandemic programs of assessment. Future programs of assessment should consider the strategic positioning of the OSCE within the context of utilizing a range of tools when determining students' clinical competence.