Introduction: Mini Clinical Evaluation Exercise (Mini-CEX) and Direct Observation of Procedural Skills (DOPS) are used as formative assessments worldwide. Since an up-to-date, comprehensive synthesis of the educational impact of Mini-CEX and DOPS is lacking, we performed a systematic review. Moreover, as the educational impact might be influenced by characteristics of the setting in which Mini-CEX and DOPS take place, or by their implementation status, we additionally investigated these potential influences.

Methods: We searched Scopus, Web of Science, and Ovid (including All Ovid Journals, Embase, ERIC, Ovid MEDLINE(R), and PsycINFO) for original research articles investigating the educational impact of Mini-CEX and DOPS on undergraduate and postgraduate trainees from all health professions, published in English or German from 1995 to 2016. Educational impact was operationalized and classified using Barr's adaptation of Kirkpatrick's four-level model. Where applicable, outcomes were pooled in meta-analyses, separately for Mini-CEX and DOPS. To examine potential influences, we used Fisher's exact test for count data.

Results: We identified 26 articles demonstrating heterogeneous effects of Mini-CEX and DOPS on learners' reactions (Kirkpatrick Level 1) and positive effects of Mini-CEX and DOPS on trainees' performance (Kirkpatrick Level 2b; Mini-CEX: standardized mean difference (SMD) = 0.26, p = 0.014; DOPS: SMD = 3.33, p < 0.001). No studies were found on higher Kirkpatrick levels. Regarding potential influences, we found two implementation characteristics, "quality" and "participant responsiveness", to be associated with the educational impact.

Conclusions: Despite the limited evidence, the meta-analyses demonstrated positive effects of Mini-CEX and DOPS on trainee performance. Additionally, we revealed implementation characteristics to be associated with the educational impact.
Hence, we assume that considering implementation characteristics could increase the educational impact of Mini-CEX and DOPS.
Our model of influencing factors might help to further improve the use of Mini-CEX and DOPS and serve as a basis for future research.
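The review pools study-level effects into an overall standardized mean difference (SMD). As a minimal sketch of how such pooling works, the following shows fixed-effect inverse-variance weighting; the study data below are hypothetical, and the review itself may have used a different (e.g. random-effects) model.

```python
import math

def pooled_smd(studies):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences. Each study is a (smd, variance) pair; studies with
    smaller variance receive proportionally larger weight."""
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * smd for (smd, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled SMD
    return pooled, se

# Hypothetical per-study SMDs and variances (not the review's data):
studies = [(0.20, 0.04), (0.35, 0.09), (0.25, 0.02)]
smd, se = pooled_smd(studies)
print(f"pooled SMD = {smd:.3f}, "
      f"95% CI = [{smd - 1.96 * se:.3f}, {smd + 1.96 * se:.3f}]")
# → pooled SMD = 0.248, 95% CI = [0.037, 0.460]
```

Inverse-variance weighting is the standard way to combine effect sizes so that more precise studies dominate the pooled estimate.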
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best-answer (Type A) multiple-choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies have yielded conflicting results. This study therefore analyzes two questions: What is the optimal scoring algorithm for MTF items with respect to reliability, difficulty index, and item discrimination? How do the psychometric characteristics of the different scoring algorithms compare to those of Type A questions used in the same exams? We used data from 37 medical exams conducted in 2015 (998 MTF and 2163 Type A items overall). Using repeated-measures analyses of variance (rANOVA), we compared the reliability, difficulty, and item discrimination of different scoring algorithms for MTF items with four answer options and for Type A items. The MTF scoring algorithms were dichotomous scoring (DS) and two partial-credit scoring algorithms: PS1/2, where examinees receive half a point if more than half of the true/false ratings are marked correctly and one point if all are marked correctly, and PS1/4, where examinees receive a quarter of a point for every correct true/false rating. The two partial-credit algorithms showed significantly higher reliabilities (PS1/2 and PS1/4: α = 0.75 each; DS: α = 0.70; Type A: α = 0.72), which corresponds to fewer items needed to reach a reliability of 0.8 (partial credit: n = 74 and n = 75; DS: n = 103; Type A: n = 87), and higher discrimination indices (PS1/2 and PS1/4: r = 0.33 each; DS: r = 0.30; Type A: r = 0.28) than dichotomous scoring and Type A. Items scored with DS tend to be difficult (p = 0.50), whereas items scored with PS1/4 tend to be easy (p = 0.82). PS1/2 and Type A cover the whole range, from easy to difficult items (p = 0.66; p = 0.73). Partial-credit scoring leads to better psychometric results than dichotomous scoring, and PS1/2 covers the range from easy to difficult items better than PS1/4. Therefore, for scoring MTF items, we suggest using PS1/2.
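The three scoring algorithms described above are simple enough to sketch directly. The function names below are illustrative (the paper defines the rules, not an implementation); each item is a list of true/false ratings compared against an answer key.

```python
def ds_score(marked, key):
    """Dichotomous scoring (DS): one point only if every true/false
    rating matches the key, otherwise zero."""
    return 1.0 if marked == key else 0.0

def ps_half_score(marked, key):
    """PS1/2: half a point if more than half of the ratings are
    correct, a full point if all are correct, otherwise zero."""
    correct = sum(m == k for m, k in zip(marked, key))
    if correct == len(key):
        return 1.0
    if correct > len(key) / 2:
        return 0.5
    return 0.0

def ps_quarter_score(marked, key):
    """PS1/4: a quarter of a point for every correct rating
    (for items with four answer options)."""
    return sum(m == k for m, k in zip(marked, key)) / len(key)

# Example: an item with four statements; the examinee gets 3 of 4 right.
key    = [True, False, True, True]
marked = [True, False, True, False]
print(ds_score(marked, key))          # → 0.0
print(ps_half_score(marked, key))     # → 0.5
print(ps_quarter_score(marked, key))  # → 0.75
```

The example illustrates why DS items score as difficult while PS1/4 items score as easy: a mostly correct response earns nothing under DS but three quarters of a point under PS1/4, with PS1/2 in between.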
Introduction: Multisource feedback (MSF), also called 360-degree assessment, is one form of assessment used in postgraduate training. However, there is an ongoing discussion about its value, because the factors that influence the impact of MSF, and the main impact of MSF itself, are not fully understood. In this study, we investigated both the influencing factors and the impact of MSF on residency training.

Methods: We conducted a qualitative case study within the boundaries of the residency training for paediatricians and paediatric surgeons at a University Hospital. We collected data from seven focus group interviews with stakeholders of MSF (residents, raters, and supervisors). By performing a reflexive thematic analysis, we extracted the influencing factors and the impact of MSF.

Results: We found seven influencing factors: MSF is facilitated by the announcement of a clear goal of MSF, the training of raters on the MSF instrument, a longitudinal approach to observation, timing that is neither too early nor too late during the rotation, narrative comments as part of the ratings, the residents' self-assessment, and a supervisor from the same department. We found three themes on the impact of MSF: MSF supports the professional development of residents, enhances interprofessional teamwork, and increases the raters' commitment to the training of residents.

Conclusion: This study illuminates the influencing factors and impact of MSF on residency training. We offer novel recommendations on the continuity of observation, the timing during rotations, and the role of the supervisor. Moreover, by discussing our results through the lens of identity formation theory, this work advances our conceptual understanding of MSF. We propose identity formation theory as a framework for future research on MSF to leverage the potential of MSF in residency training.