Construct: Competency-based medical education (CBME) is designed to use workplace-based assessment (WBA) tools to provide observed assessment and feedback on resident competence. Moreover, WBAs are expected to provide evidence beyond that of more traditional mid- or end-of-rotation assessments [e.g., In-Training Evaluation Records (ITERs)]. In this study we investigate competence in General Internal Medicine (GIM) by contrasting WBA and ITER assessment tools.
Background: WBAs are hypothesized to improve and differentiate written and numerical feedback to support the development and documentation of competence. In this study we investigate residents' and faculty members' perceptions of WBA validity, usability, and reliability, and the extent to which WBAs differentiate residents' performance when compared to ITERs.
Approach: We used a mixed-methods approach over a three-year period, drawing on focus groups and interviews as well as numerical and narrative comparisons between WBAs and ITERs in one GIM program.
Results: Residents indicated that the narrative component of feedback was more constructive and effective than numerical scores, and they perceived the focus on specific workplace-based feedback as more effective than ITERs. However, quantitative analysis showed that overall rates of actionable feedback, across both ITERs and WBAs, were low (26%), with only 9% of assessments providing an improvement strategy. The quality of feedback did not differ statistically significantly between tools; although WBAs provided more actionable feedback, ITERs provided more strategies. Statistical analyses also showed that more than half of all assessments came from 11 core faculty.
Conclusions: Participants in this study viewed narrative, actionable, and specific feedback as essential, with an overall preference for written feedback over numerical assessments. However, quantitative analyses showed that specific, actionable feedback was rarely documented, despite both groups' qualitative emphasis on its importance for developing competence. Neither formative WBAs nor summative ITERs clearly provided better feedback, and both may still have a role in overall resident evaluation. Participants' views on roles and responsibilities also differed: residents stated that faculty should be responsible for initiating assessments, and faculty stated the reverse. These results reveal a disconnect between resident and faculty perceptions and practice around giving feedback, and they highlight opportunities for programs adopting and implementing CBME to address how best to support residents and frontline clinical teachers.
First described in 1959, intravascular lymphoma (IVL) remains one of the most clinically challenging diagnoses because of its diverse, non-specific clinical manifestations and its tendency to evade detection by standard investigations. Indeed, IVL deserves the title of "medicine's greatest imitator". We highlight a case of IVL in which the diagnosis, made by random skin biopsy, came too late in the clinical course. Clinicians should strongly consider this diagnosis when symptoms persist despite appropriate interventions.
Defining General Internal Medicine (GIM) has been difficult because of the tension between ensuring flexibility for varied practice environments and the need for national standards. With the launch of competency-based medical education, the Royal College of Physicians and Surgeons of Canada Specialty Committee in GIM (SCGIM), the national standard-setting body, had the opportunity to define the discipline explicitly by elaborating the GIM competencies and Entrustable Professional Activities (EPAs). Defining the EPAs is, in essence, defining the tasks of the discipline. We describe the SCGIM's approach to continuously reviewing the theoretical written documentation around EPAs in the real-world environment, in order to continuously refine the EPAs and ensure they are facilitating skill attainment. Major lessons learned: (1) centralized feedback with simple reporting and multiple inputs works best; (2) there is tension between theory (perfect EPAs) and practical implementation; (3) it takes time to see how the EPAs are performing.
In competency-based medical education (CBME), assessment is learner-driven; learners may fail to progress if assessments are not completed. The General Internal Medicine (GIM) program at Queen’s University uses an educational technique known as scaffolding in its assessment strategy. The program applies this technique to coordinate early assessments with specific scheduled learning experiences and gradually releases the responsibility for assessment initiation to residents. Although outcomes of this innovation are still under investigation, we feel it has been valuable in supporting resident assessment capture and timely progression through stages of training. Other residency training programs could easily implement this technique to support the transition to Competency by Design.