Background
Teaching residents how to teach is a critical part of resident education because residents are often the major teachers of medical students. The importance of formal residents-as-teachers (RAT) curricula has been emphasized throughout the literature, yet not all residency programs have such a curriculum in place.
Objective
The purpose of our study was to (1) review the medical education literature for established RAT curricula, (2) assess the reproducibility of published curricula, (3) evaluate the types of outcomes achieved using the Kirkpatrick model of evaluation, and (4) identify curricula that training programs could feasibly adopt.
Methods
We performed a literature review using PubMed, MEDLINE, Scopus, PsycINFO, ERIC, and Embase. Key search terms included residents, residents as teachers, teaching, internship and residency, and curriculum. In addition, a search of MedEdPORTAL was performed using the same key terms. Articles were evaluated based on the reproducibility of their curricula and assessment tools. Educational outcomes were evaluated using the Kirkpatrick model.
Results
Thirty-nine articles were deemed appropriate for review. Interventions and evaluation techniques varied greatly. Only 1 article was deemed to have both a curriculum and assessment tools that would be fully reproducible by other programs.
Conclusions
A literature review of RAT curricula found few articles that could be easily reproduced by residency programs seeking to start or improve their own RAT curricula. It also demonstrated the difficulty of outcome measurement and the lack of rigorous evaluation for most curricula.
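The Kirkpatrick model referenced above grades educational outcomes on four levels (1: reaction, 2: learning, 3: behavior, 4: results). As a minimal illustrative sketch of how reviewed articles might be coded against those levels, assuming articles are stored as simple records with a highest-outcome field (all titles and data below are hypothetical, not the 39 reviewed articles):

```python
# Hypothetical coding of reviewed articles against the four Kirkpatrick levels.
# Level 1 = reaction, 2 = learning, 3 = behavior, 4 = results.
from collections import Counter

KIRKPATRICK_LEVELS = {1: "reaction", 2: "learning", 3: "behavior", 4: "results"}

# Illustrative records only; the actual review data are not reproduced here.
articles = [
    {"title": "RAT workshop A", "highest_outcome_level": 1},
    {"title": "RAT curriculum B", "highest_outcome_level": 2},
    {"title": "RAT OSTE study C", "highest_outcome_level": 3},
]

# Tally how many articles reached each outcome level.
counts = Counter(a["highest_outcome_level"] for a in articles)
for level, name in KIRKPATRICK_LEVELS.items():
    print(f"Level {level} ({name}): {counts.get(level, 0)} article(s)")
```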
Despite an Accreditation Council for Graduate Medical Education requirement for a RAT curriculum, some pediatrics programs still lack one, and some consider their existing program only moderately effective. A wealth of curricular material exists across programs and could be shared nationally. Establishing a national RAT curriculum would give programs the resources to meet educational mandates while allowing each to tailor content to its own needs.
Background
The Pediatrics Milestone Project uses behavioral anchors, narrative descriptions of observable behaviors, to describe learner progression through the Accreditation Council for Graduate Medical Education competencies. Starting in June 2014, pediatrics programs were required to submit milestone reports for their trainees semiannually. Existing Likert-type scale assessment tools were not designed to inform milestone reporting, creating a challenge for Clinical Competency Committees.
Objective
To determine whether milestone-based assessments stratify trainees by training level better than Likert-type assessments do.
Methods
We compared assessment results for 3 subcompetencies after changing from a 5-point Likert scale to milestone-based behavioral anchors in July 2013. Program leadership evaluated the new system by (1) comparing PGY-1 mean scores on Likert-type versus milestone-based assessments, and (2) comparing mean scores on Likert-type versus milestone-based assessments across PGY levels.
Results
Mean scores for PGY-1 residents were significantly higher on the prior year's Likert-type assessments than on the milestone-based assessments for all 3 subcompetencies (P < .01). Stratification by PGY level was not observed with Likert-type assessments (eg, interpersonal and communication skills 1 [ICS1] mean score of 3.99 for PGY-1 versus 3.98 for PGY-3; P = .98). In contrast, milestone-based assessments demonstrated stratification by PGY level (eg, ICS1 mean scores of 3.06 for PGY-1, 3.83 for PGY-2, and 3.99 for PGY-3; P < .01 for PGY-1 versus PGY-3). Significantly different means by trainee level were noted across 21 subcompetencies on milestone-based assessments (P < .01 for PGY-1 versus PGY-3).
Conclusions
Initial results indicate that milestone-based assessments stratify trainee performance by level better than Likert-type assessments. Average scores by PGY level from milestone-based assessments may ultimately provide guidance for determining whether trainees are progressing at the expected pace.
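The core analysis described above compares mean assessment scores between training levels. The abstract does not specify the exact statistical test used; as a minimal sketch of how such a comparison could be run, assuming per-resident scores are available as arrays and using Welch's t-test (all group sizes and data below are hypothetical):

```python
# Sketch of comparing PGY-1 vs PGY-3 mean ICS1 scores under two assessment
# formats. Data are simulated around the published group means; this is an
# illustration, not the authors' analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ICS1 scores on milestone-based behavioral anchors.
pgy1_milestone = rng.normal(loc=3.06, scale=0.4, size=30)
pgy3_milestone = rng.normal(loc=3.99, scale=0.4, size=30)

# Hypothetical ICS1 scores on the 5-point Likert-type scale.
pgy1_likert = rng.normal(loc=3.99, scale=0.4, size=30)
pgy3_likert = rng.normal(loc=3.98, scale=0.4, size=30)

for label, a, b in [("milestone", pgy1_milestone, pgy3_milestone),
                    ("Likert", pgy1_likert, pgy3_likert)]:
    # Welch's t-test (unequal variances) comparing PGY-1 vs PGY-3 means.
    t_stat, p_val = stats.ttest_ind(a, b, equal_var=False)
    print(f"{label}: PGY-1 mean={a.mean():.2f}, "
          f"PGY-3 mean={b.mean():.2f}, P={p_val:.3g}")
```

With simulated data of this shape, the milestone-based comparison yields a small P value (stratification by level) while the Likert-type comparison does not, mirroring the pattern the abstract reports.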
The predominant learning preferences among a cohort of pediatric residents from a single training program favored interactive learning, suggesting that some trainees may benefit from supplementing educational curricula with additional interactive experiences. Continued investigation is needed to assess the effectiveness of adapting teaching techniques to individual learning preferences.
After participating in a group peer-review exercise at a workshop presented by Academic Medicine and MedEdPORTAL editors at the 2015 Association of American Medical Colleges Medical Education Meeting, the authors realized that the way their work group reviewed a manuscript was very different from the way each of them would have reviewed it individually. Further, the group peer-review process yielded more robust feedback for the manuscript's authors than the traditional individual peer-review process did. This realization motivated the authors to reconvene and collaborate on this Commentary to share their experience and propose the expanded use of group peer review in medical education scholarship.

The authors consider the benefits of the peer-review process for reviewers, including learning how to improve their own manuscripts. They suggest that the benefits of a team review model may be similar to those of teamwork and team-based learning in medicine and medical education. They call for research to investigate this, to provide evidence to support group review, and to determine which types of papers would benefit most from team review (e.g., particularly complex manuscripts or those receiving widely disparate initial individual reviews). In addition, the authors propose ways in which a team-based approach to peer review could be expanded by journals and institutions. They believe that exploring the use of group peer review could create a new methodology for skill development in research and scholarly writing and could enhance the quality of medical education scholarship.