Objectives: Multisource feedback (MSF) has potential value in learner assessment, but it has not been broadly implemented or studied in emergency medicine (EM). This study aimed to adapt existing MSF instruments for emergency department implementation, measure feasibility, and collect initial validity evidence to support score interpretation for learner assessment.
Methods: Residents from eight U.S. EM residency programs completed a self-assessment and were assessed by eight physicians, eight nonphysician colleagues, and 25 patients using unique instruments. Instruments included a five-point rating scale to assess interpersonal and communication skills, professionalism, systems-based practice, practice-based learning and improvement, and patient care. MSF feasibility was measured by the percentage of residents who collected the target number of instruments. To develop internal structure validity evidence, Cronbach's alpha was calculated as a measure of internal consistency.

Results: A total of 125 residents collected a mean of 7.0 physician assessments (n = 752), 6.7 nonphysician assessments (n = 775), and 17.8 patient assessments (n = 2,100), with respective response rates of 67.2%, 75.2%, and 77.5%. Cronbach's alpha values for physicians, nonphysicians, patients, and self were 0.97, 0.97, 0.96, and 0.96, respectively.

Conclusions: This study demonstrated that MSF implementation is feasible, although challenging. The tool and its scale demonstrated excellent internal consistency. EM educators may find the adaptation process and tools applicable to their learners.

As residency programs strive to develop, implement, and refine assessment strategies that align with the Accreditation Council for Graduate Medical Education (ACGME)'s Next Accreditation System, program directors are approaching a common problem: How can workplace-based assessment best capture learner performance in a way that provides meaningful data to inform clinical competency