Context: The electronic health record (EHR) has been identified as a potential site for gathering data about trainees' clinical performance, but these data are not collected or organised for this purpose. A careful and rigorous approach is therefore required to explore how EHR data could be meaningfully used for assessment. The purpose of this study was to identify EHR performance metrics that represent both the independent and interdependent clinical performance of emergency medicine (EM) trainees and to explore how they might be meaningfully used for assessment and feedback.

Methods: Using constructivist grounded theory, we conducted 21 semi-structured interviews with EM faculty members and residents. Participants were asked to identify the clinical actions of trainees that would be valuable for assessment and feedback and to describe how those activities are represented in the EHR. Data collection and analysis, which consisted of three stages of coding, occurred iteratively.

Results: When asked to reflect on the usefulness of EHR performance metrics for resident assessment and feedback, EM faculty members and trainees expressed widespread support for the idea in principle, but also hesitation that aspects of clinical performance captured in the data would not represent residents' individual performance and would instead reflect their interdependence with other team members and the systems in which they work. We highlight three categorisations of system-level interdependence identified by our participants (medical directives, technological systems and organisational systems) and discuss the strategies participants employed to navigate these forms of interdependence within the health care system.

Conclusions: System-level interdependence shapes physicians' performance, yet its impact is rarely noted or corrected for within clinical performance data. Educators have a responsibility to recognise system-level interdependence when teaching and to consider it when assessing the performance of trainees in order to use the EHR as a source of assessment data as effectively and fairly as possible.
Objectives: Competency-based medical education requires that residents be given frequent opportunities to demonstrate competence and receive effective feedback about their clinical performance. To meet this goal, we investigated how data collected by the electronic health record (EHR) might be used to assess emergency medicine (EM) residents' independent and interdependent clinical performance, and how such information could be represented in an EM resident report card.

Methods: Following constructivist grounded theory methodology, individual semistructured interviews were conducted in 2017 with 10 EM faculty and 11 EM residents across all 5 postgraduate years. In addition to open-ended questions, participants were presented with an emerging list of EM practice metrics and asked to comment on how valuable each would be in assessing resident performance and on the extent to which each metric captured independent or interdependent performance. Data collection and analysis were iterative; analysis employed constant comparative inductive methods.

Results: Participants refined and eliminated metrics and added new metrics specific to the assessment of EM residents (e.g., time between signup and first orders). These clinical practice metrics, based on data from our EHR database, were organized along a spectrum of independent to interdependent performance. We conclude with a discussion of the relationships among these metrics, issues in their interpretation, and the implications of using the EHR for assessment purposes.

Conclusions: Our findings document a systematic approach for developing EM resident assessments, based on EHR data, that incorporates the perspectives of both clinical faculty and residents. Our work has important implications for capturing residents' contributions to clinical performance and for distinguishing between independent and interdependent metrics in collaborative workplace-based settings.