Context: The electronic health record (EHR) has been identified as a potential site for gathering data about trainees' clinical performance, but these data are not collected or organised for this purpose. A careful and rigorous approach is therefore required to explore how EHR data could be meaningfully used for assessment. The purpose of this study was to identify EHR performance metrics that represent both the independent and interdependent clinical performance of emergency medicine (EM) trainees and to explore how they might be meaningfully used for assessment and feedback.

Methods: Using constructivist grounded theory, we conducted 21 semi-structured interviews with EM faculty members and residents. Participants were asked to identify the clinical actions of trainees that would be valuable for assessment and feedback and to describe how those activities are represented in the EHR. Data collection and analysis, which consisted of three stages of coding, occurred iteratively.

Results: When asked to reflect on the usefulness of EHR performance metrics for resident assessment and feedback, faculty members and trainees expressed widespread support for the idea in principle, alongside hesitation that aspects of clinical performance captured in the data would not represent residents' individual performance but would instead reflect their interdependence with other team members and the systems in which they work. We highlight three categorisations of system-level interdependence identified by our participants (medical directives, technological systems and organisational systems) and discuss strategies participants employed to navigate these forms of interdependence within the health care system.

Conclusions: System-level interdependence shapes physicians' performance, yet this impact is rarely corrected for or noted within clinical performance data. Educators have a responsibility to recognise system-level interdependence when teaching and to consider it when assessing trainee performance, in order to use the EHR effectively and fairly as a source of assessment data.
Frequentist confidence intervals were compared with Bayesian credible intervals under a variety of scenarios to determine when Bayesian credible intervals outperform frequentist confidence intervals. Results indicated that Bayesian interval estimation frequently produces results with precision greater than or equal to the frequentist method.
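The interval comparison above can be illustrated with a minimal sketch for a binomial proportion. The scenario is invented for illustration and is not drawn from the cited study: it pairs a standard frequentist Wald interval with an approximate equal-tailed Bayesian credible interval under an assumed uniform Beta(1, 1) prior, using a normal approximation to the Beta posterior so the example needs only the standard library.

```python
import math

def wald_ci(successes, n, z=1.96):
    """Frequentist (Wald) ~95% confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (p - half, p + half)

def bayes_credible(successes, n, z=1.96):
    """Approximate ~95% equal-tailed credible interval under a uniform
    Beta(1, 1) prior, via a normal approximation to the Beta posterior.
    (An exact interval would use Beta quantiles instead.)"""
    a, b = successes + 1, (n - successes) + 1
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    half = z * math.sqrt(var)
    return (mean - half, mean + half)

# With 8 successes out of 20, the Bayesian interval comes out slightly
# narrower: the prior shrinks the estimate toward 0.5 and adds two
# pseudo-observations, reducing the posterior variance.
f_lo, f_hi = wald_ci(8, 20)
b_lo, b_hi = bayes_credible(8, 20)
print(f"frequentist: ({f_lo:.3f}, {f_hi:.3f}), width {f_hi - f_lo:.3f}")
print(f"bayesian:    ({b_lo:.3f}, {b_hi:.3f}), width {b_hi - b_lo:.3f}")
```

In this toy case the credible interval is narrower than the confidence interval, consistent with the abstract's claim that Bayesian estimation frequently matches or exceeds frequentist precision; with stronger or poorly chosen priors the comparison can differ.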
Objectives: Competency-based medical education requires that residents are provided with frequent opportunities to demonstrate competence as well as receive effective feedback about their clinical performance. To meet this goal, we investigated how data collected by the electronic health record (EHR) might be used to assess emergency medicine (EM) residents' independent and interdependent clinical performance and how such information could be represented in an EM resident report card.

Methods: Following constructivist grounded theory methodology, individual semistructured interviews were conducted in 2017 with 10 EM faculty and 11 EM residents across all 5 postgraduate years. In addition to open-ended questions, participants were presented with an emerging list of EM practice metrics and asked to comment on how valuable each would be in assessing resident performance. Additionally, we asked participants the extent to which each metric captured independent or interdependent performance. Data collection and analysis were iterative; analysis employed constant comparative inductive methods.

Results: Participants refined and eliminated metrics as well as added new metrics specific to the assessment of EM residents (e.g., time between signup and first orders). These clinical practice metrics based on data from our EHR database were organized along a spectrum of independent/interdependent performance. We conclude with discussions about the relationship among these metrics, issues in interpretation, and implications of using EHR for assessment purposes.

Conclusions: Our findings document a systematic approach for developing EM resident assessments, based on EHR data, which incorporate the perspectives of both clinical faculty and residents. Our work has important implications for capturing residents' contributions to clinical performances and distinguishing between independent and interdependent metrics in collaborative workplace-based settings.
Purpose: Feedback continues to present a challenge for competency-based medical education. Clear, consistent, and credible feedback is vital to supporting one's ongoing development, yet it can be difficult to gather clinical performance data about residents. This study sought to determine whether providing residents with electronic health record (EHR)-based report cards, as well as an opportunity to discuss these data with faculty trained using the R2C2 model, can help residents understand and interpret their clinical performance metrics.

Method: Using action research methodology, the author team collected EHR data from July 2017 to February 2020 for all residents (n = 21) in one 5-year Emergency Medicine program and created personalized report cards for each resident. During October 6-17, 2020, 8 out of 17 eligible residents agreed to have their feedback conversations recorded and to participate in a subsequent interview with a nonphysician member of the research team. Data were analyzed thematically, with themes identified inductively.

Results: In analyzing both the feedback conversations and the individual interviews with faculty and residents, the authors identified 2 main themes: (1) reactions and responses to receiving personalized EHR data and (2) the value of EHR data for assessment and feedback purposes. All participants believed that EHR data metrics are useful for prompting self-reflection, and many pointed to their utility in providing suggestions for actionable changes in their clinical practice. For faculty, having a tool through which underperforming residents can be shown "objective" data about their clinical performance helps underscore the need for improvement, particularly when residents are resistant.

Conclusions: The EHR is a valuable source of educational data, and this study demonstrates one of the many thoughtful ways it can be used for assessment and feedback purposes.
Introduction: Competency-based medical education (CBME) affirms that trainees will receive timely assessments and effective feedback about their clinical performance, which has inevitably raised concerns about assessment burden. Therefore, we need ways of generating assessments that do not rely exclusively on faculty-produced reports. The main objective of this research is to investigate how data already collected in the electronic health record (EHR) might be meaningfully and appropriately used for assessing emergency medicine (EM) trainees' independent and interdependent clinical performance. This study represents the first step in exploring what EHR data might be utilized to monitor and assess trainees' clinical performance.

Methods: Following constructivist grounded theory, individual semi-structured interviews were conducted with 10 EM faculty and 11 EM trainees, across all postgraduate years, to identify EHR performance indicators that represent EM trainees' independent and interdependent clinical actions and decisions. Participants were presented with a list of performance indicators and asked to comment on how valuable each would be in assessing trainee performance. Data analysis employed constant comparative inductive methods and occurred throughout data collection.

Results: Participants created, refined, and eliminated performance indicators. Our main result is a catalogue of clinical performance indicators, described by our participants as reflecting independent and/or interdependent EM trainee performance, that are believed to be captured within the EHR. Such independent indicators include: number of patients seen (according to CTAS levels), turnaround time between when a resident signs up for a patient and when first orders are placed, and number of narcotics prescribed. Meanwhile, interdependent indicators include, but are not limited to, length of stay, bounce-back rates, ordering practices, and time to fluids.
Conclusion: Our findings document a process for developing EM trainee report cards that incorporate the perspectives of clinical faculty and trainees. Our work has important implications for distinguishing between independent and interdependent clinical performance indicators.
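As a hypothetical illustration of how one such independent indicator could be derived from EHR timestamps, the sketch below computes the "turnaround time between signup and first orders" metric per resident. The row schema, field names, and values are invented for illustration; they are not the study's actual EHR structure or data.

```python
from datetime import datetime
from statistics import median

# Hypothetical EHR event rows: (resident, patient_id, signup_time, first_order_time).
# Schema and values are illustrative only, not the study's real database layout.
rows = [
    ("res_A", "pt1", "2020-01-06 09:00", "2020-01-06 09:12"),
    ("res_A", "pt2", "2020-01-06 09:30", "2020-01-06 10:02"),
    ("res_B", "pt3", "2020-01-06 09:05", "2020-01-06 09:20"),
]

def parse(ts):
    """Parse a 'YYYY-MM-DD HH:MM' timestamp string."""
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def signup_to_first_order(rows):
    """Median minutes from patient signup to first orders, per resident --
    one of the 'independent' indicators participants proposed."""
    per_resident = {}
    for resident, _pt, signup, first_order in rows:
        delta = (parse(first_order) - parse(signup)).total_seconds() / 60
        per_resident.setdefault(resident, []).append(delta)
    return {r: median(deltas) for r, deltas in per_resident.items()}

print(signup_to_first_order(rows))
# → {'res_A': 22.0, 'res_B': 15.0}
```

Using the median rather than the mean keeps a single outlier shift (e.g., a resuscitation that delays all other orders) from dominating a resident's report-card value, which matters given the participants' concerns about interdependence contaminating individual metrics.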