Objectives: There is currently no consolidated list of existing simulation fellowship programs in emergency medicine (EM). In addition, there are no universally accepted or expected standards for core curricular content. The objective of this project is to develop consensus-based core content for EM simulation fellowships to help frame the critical components of such training programs. Methods: This paper delineates the process used to develop consensus curriculum content for EM simulation fellowships. EM simulation fellowship curricula were collected. Curricular content was reviewed and compiled by simulation experts and validated using survey methodology, and consensus was obtained using a modified Delphi methodology. Results: Fifteen EM simulation fellowship curricula were obtained and analyzed. Two rounds of a modified Delphi survey were conducted. The final proposed core curriculum content contains 47 elements in nine domains, with 14 optional elements. Conclusion: The proposed consensus content will provide current and future fellowships a foundation on which to build their own specific and detailed fellowship curricula. Such standardization will ultimately increase the transparency of training programs for future trainees and potential employers.
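Though the abstract does not spell out its consensus rules, the core mechanic of a modified Delphi round is tallying expert ratings for each proposed element against a pre-set agreement threshold. The sketch below illustrates this in Python; the 70% threshold, the binary include/exclude ratings, and the element names are hypothetical assumptions rather than details from the study.

```python
# Illustrative sketch of tallying one modified Delphi survey round.
# The 70% agreement threshold and all rating data are hypothetical;
# the abstract does not specify the study's consensus criteria.

CONSENSUS_THRESHOLD = 0.70  # assumed fraction of "include" votes for core status

def classify_elements(votes: dict[str, list[bool]]) -> dict[str, str]:
    """Map each proposed curriculum element to a consensus decision.

    votes: element name -> list of True/False "include in core" ratings
    from one survey round.
    """
    decisions = {}
    for element, ratings in votes.items():
        agreement = sum(ratings) / len(ratings)
        if agreement >= CONSENSUS_THRESHOLD:
            decisions[element] = "core"
        elif agreement >= 0.5:
            decisions[element] = "optional"  # majority support, short of consensus
        else:
            decisions[element] = "dropped"
    return decisions

# Example round with three hypothetical elements rated by ten experts:
round_votes = {
    "debriefing techniques": [True] * 9 + [False],
    "grant writing": [True] * 6 + [False] * 4,
    "basic clinical skills": [True] * 3 + [False] * 7,
}
print(classify_elements(round_votes))
# {'debriefing techniques': 'core', 'grant writing': 'optional', 'basic clinical skills': 'dropped'}
```

Across multiple rounds, elements short of consensus would typically be revised and re-rated, which is how a two-round process like the one described could converge on a final set of core and optional elements.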
Objectives: Emergency medicine (EM) faculty often aim to improve resident performance by enhancing the quality and delivery of feedback. The acceptance and integration of external feedback are influenced by multiple factors; however, feedback is interpreted through the "lens" of the learner's own self-assessment. Ideally, following an educational activity with feedback, a learner should be able to generate and act upon specific learning goals to improve performance. Examining the source of generated learning goals, whether from one's self-assessment or from external feedback, might shed light on the factors that lead to improvement and guide educational initiatives. The objective of this study was to use a standardized oral board scenario to determine the effects that residents' self-assessments and specific faculty feedback have not only on the generation of learning goals but also on the execution of those goals for performance improvement. Methods: In this cross-sectional educational study at four academic programs, 72 senior EM residents participated in a standardized oral board scenario. Following the scenario, residents completed a self-assessment form. Next, examiners used a standardized checklist to provide both positive and negative feedback. Subsequently, residents were asked to generate "SMART" learning goals (specific, measurable, attainable, realistic, and time-bound). The investigators categorized the learning goals as stemming from the residents' self-assessments, feedback, or both. Within 4 weeks, the residents were asked to recall their learning goals and describe any actions taken to achieve those goals. These were grouped into similar categories. Descriptive statistics were used to summarize the data. Results: A total of 226 learning goals were initially generated (mean ± SD = 3.1 ± 1.3 per resident). Forty-seven percent of the learning goals were generated from the residents' self-assessments only, while 27% were generated from feedback alone. Residents who performed poorly on the case incorporated feedback more often than high performers when generating learning goals. Follow-up data collection showed that 62 residents recalled 89 learning goals, of which 52 were acted upon. At follow-up, the numbers of learning goals from self-assessment and feedback were equal (25% each, 13 of 52), while the greatest number of reportedly executed learning goals came from self-assessments and feedback in agreement (40%). Conclusions: Following feedback on an oral board scenario, residents generated the majority of their learning goals from their own self-assessments. Conversely, at the follow-up period, they recalled an increased number of learning goals stemming from feedback, while the largest proportion of learning goals acted upon stemmed from both feedback and self-assessments in agreement.
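The descriptive analysis above reduces to labeling each goal by its source and reporting proportions. A minimal sketch of that tally, with invented records whose counts roughly reproduce the reported percentages:

```python
from collections import Counter

# Hypothetical sketch of the descriptive tally of learning-goal sources.
# The category labels mirror the abstract; the individual records are invented
# so that the proportions approximate the reported 47% / 27% split of 226 goals.
goal_sources = (
    ["self-assessment"] * 106   # ~47% of 226 goals
    + ["feedback"] * 61         # ~27%
    + ["both"] * 59             # remainder
)

counts = Counter(goal_sources)
total = len(goal_sources)
for source, n in counts.items():
    print(f"{source}: {n} ({n / total:.0%})")
```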
Introduction Airway management is a fundamental skill of emergency medicine (EM) practice, and suboptimal management leads to poor outcomes. Endotracheal intubation (ETI) is a procedure that is specifically taught in residency, but little is known about how best to maintain proficiency in this skill throughout the practitioner’s career. The goal of this study was to identify how the frequency of intubation correlated with measured performance. Methods We assessed 44 emergency physicians for proficiency at ETI by direct laryngoscopy on a simulator. The electronic health record was then queried to obtain their average number of annual ETIs and the time since their last ETI, supervised and individually performed, over a two-year period. We evaluated the strength of correlation between these factors and assessment scores, and then conducted a receiver operating characteristic (ROC) curve analysis to identify factors that predicted proficient performance. Results The mean score was 81% (95% confidence interval, 76%–86%). Scores correlated well with the mean number of ETIs performed annually and with the mean number supervised annually (r = 0.6, p = 0.001 for both). ROC curve analysis identified that physicians would obtain a proficient score if they had performed an average of at least three ETIs annually (sensitivity = 90%, specificity = 64%, AUC = 0.87, p = 0.001) or supervised an average of at least five ETIs annually (sensitivity = 90%, specificity = 59%, AUC = 0.81, p = 0.006) over the previous two years. Conclusion Performing at least three or supervising at least five ETIs annually, averaged over a two-year period, predicted proficient performance on a simulation-based skills assessment. We advocate for proactive maintenance and enhancement of skills, particularly for those who infrequently perform this procedure.
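The threshold-finding step follows a standard ROC workflow: sweep candidate cutoffs over the predictor, then pick the one with the best sensitivity/specificity trade-off. The sketch below uses simulated scores and labels, and selects the cutoff by Youden's J statistic; the study does not report its selection criterion, so that choice, like the data, is an assumption.

```python
# Sketch of a ROC-based cutoff search. The annual-ETI counts and proficiency
# labels below are simulated stand-ins, and picking the cutoff by Youden's J
# is an assumption; the paper does not state how its thresholds were chosen.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

annual_etis = np.array([0, 1, 1, 2, 2, 3, 3, 4, 5, 6, 7, 8])  # ETIs/year (simulated)
proficient = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1])   # 1 = proficient score

fpr, tpr, thresholds = roc_curve(proficient, annual_etis)
auc = roc_auc_score(proficient, annual_etis)

best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}")
print(f"cutoff: >= {thresholds[best]:g} ETIs/year "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```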
Introduction Goal setting is used in education to promote learning and performance. Debriefing after clinical scenario-based simulation is a well-established practice that provides learners a defined structure to review and improve performance. Our objective was to integrate formal learning goal generation, using the SMART framework (Specific, Measurable, Attainable, Realistic, and Time-bound), into standard debriefing processes (i.e., “SMART Goal Enhanced Debriefing”) and subsequently measure the impact on the development of learning goals and execution of educational actions. Methods This was a prospective multicenter randomized controlled study of 80 emergency medicine residents at three academic hospitals comparing the effectiveness of SMART Goal Enhanced Debriefing to standard debriefing. Residents were block randomized on a rolling basis following a simulation case. SMART Goal Enhanced Debriefing included five minutes of formal instruction on the development of SMART learning goals during the summary/application phase of the debrief. Outcome measures included the number of recalled learning goals, self-reported executed educational actions, and the quality of each learning goal and educational action after a two-week follow-up period. Results The mean number of reported learning goals was similar in the standard debriefing group (mean 2.05 goals, SD 1.13, n=37 residents) and in the SMART Goal Enhanced Debriefing group (mean 1.93, SD 0.96, n=43), with no difference in learning goal quality. Residents receiving SMART Goal Enhanced Debriefing completed more educational actions on average (control group 0.97 [SD 0.87] vs. SMART debrief group 1.44 [SD 1.03], p=0.03). Conclusion The number and quality of learning goals reported by residents were not improved by SMART Goal Enhanced Debriefing. Residents did, however, execute more educational actions, which is consistent with the overarching intent of any educational intervention.
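The headline result is a two-group comparison of per-resident counts of executed actions. Because the abstract reports only group means and SDs, a summary-statistics test can be sketched directly; Welch's t-test is an assumption, as the abstract does not name the test used.

```python
# Sketch of the between-group comparison using the summary statistics reported
# in the abstract. Welch's t-test is an assumption: the abstract reports p=0.03
# but does not name the test that produced it.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=1.44, std1=1.03, nobs1=43,  # SMART Goal Enhanced Debriefing group
    mean2=0.97, std2=0.87, nobs2=37,  # standard debriefing (control) group
    equal_var=False,                  # Welch's t-test (unequal variances)
)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

Run with the reported summary statistics, this gives t ≈ 2.21 and p ≈ 0.030, consistent with the published p=0.03.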