Objective To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. Materials and Methods We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. Results We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Discussion Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. Conclusion CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent malfunctions before they occur or detect and resolve them expediently.
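The four classification axes map naturally onto a small record type. Below is a minimal sketch of how a malfunction case might be encoded for analysis; the class and enum names are illustrative assumptions, and only the axis values actually named in the abstract are included:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative axis values; the abstract names the four axes but only
# some of their categories, so the members below are a partial sketch.
class Cause(Enum):
    BUILD_ERROR = "build error"
    CONCEPTUALIZATION_ERROR = "conceptualization error"
    NEW_CONCEPT_OR_TERM = "new concept or term introduced"

class Discovery(Enum):
    USER_REPORT = "user report"
    FIRING_RATE_ANALYSIS = "firing-rate analysis"
    OVERRIDE_ANALYSIS = "override analysis"

class FiringEffect(Enum):
    FALSE_POSITIVE = "fired when it should not have"
    FALSE_NEGATIVE = "failed to fire when it should have"

@dataclass
class MalfunctionCase:
    """One CDS alert malfunction classified on the four axes."""
    cause: Cause
    discovered_by: Discovery
    onset: str  # when the malfunction began, e.g. "at system upgrade"
    effect: FiringEffect

case = MalfunctionCase(
    cause=Cause.BUILD_ERROR,
    discovered_by=Discovery.USER_REPORT,
    onset="after code-set update",
    effect=FiringEffect.FALSE_POSITIVE,
)
```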
Objective The study sought to describe the literature on clinical reasoning ontology (CRO)–based clinical decision support systems (CDSSs) and to identify and classify the medical knowledge and reasoning concepts, and their properties, within these ontologies to guide future research. Methods MEDLINE, Scopus, and Google Scholar were searched through January 30, 2019, for studies describing CRO-based CDSSs. Articles that explored the development or application of CROs or terminology were selected. Eligible articles were assessed for quality features of both CDSSs and CROs to determine current practices. We then compiled the concepts and properties used within the articles. Results We included 38 CRO-based CDSSs in the analysis. The ontologies varied in purpose and scope, and a variety of knowledge sources were used for their development. We found 126 unique medical knowledge concepts, 38 unique reasoning concepts, and 240 unique properties (137 relationships and 103 attributes). Although the terms used across CROs are highly diverse, their descriptions overlap substantially. Only 5 studies described a high-quality assessment. Conclusion We identified current practices in CRO development and provided lists of the medical knowledge concepts, reasoning concepts, and properties (relationships and attributes) used by CRO-based CDSSs. CRO developers reason that including the concepts clinicians use during medical decision making has the potential to improve CDSS performance. However, at present, few CROs have been used for CDSSs, and high-quality studies describing CROs are sparse. Further research is required to develop high-quality CDSSs based on CROs.
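To make the three inventories concrete (medical knowledge concepts, reasoning concepts, and properties split into relationships and attributes), here is a minimal sketch of how such elements might be modeled; every concept and predicate name in it is a hypothetical example, not a term drawn from the reviewed ontologies:

```python
from dataclasses import dataclass, field

# Mini-model of a clinical reasoning ontology: concepts carry
# attributes, and relationship properties link concepts to concepts.

@dataclass
class Concept:
    name: str
    kind: str                      # "medical knowledge" or "reasoning"
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    subject: Concept
    predicate: str                 # e.g. "is_supported_by"
    obj: Concept

pneumonia = Concept("Pneumonia", "medical knowledge",
                    attributes={"typical_onset": "acute"})
fever = Concept("Fever", "medical knowledge")
hypothesis = Concept("DiagnosticHypothesis", "reasoning")

triples = [
    Relationship(fever, "is_manifestation_of", pneumonia),
    Relationship(hypothesis, "is_supported_by", fever),
]
```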
Objective To develop a collection of concept-relationship-concept tuples to formally represent patients' care context data and inform electronic health record (EHR) development. Materials and Methods We reviewed semantic relationships reported in the literature and developed a manual annotation schema. We used the initial schema to annotate sentences extracted from the narrative sections of cardiology, urology, and ear, nose, and throat (ENT) notes. We audio-recorded ENT visits and annotated their parsed transcripts. We combined the results of each annotation round into a consolidated set of concept-relationship-concept tuples and then compared the tuples used within and across the multiple data sources. Results We annotated a total of 626 sentences. Starting with 8 relationships from the literature, we annotated 182 sentences from 8 inpatient consult notes (initial set of tuples = 43). Next, we annotated 232 sentences from 10 outpatient visit notes (enhanced set of tuples = 75). Finally, we annotated 212 sentences from transcripts of 5 outpatient visits (final set of tuples = 82). The tuples from the visit transcripts covered 103 (74%) of the concepts documented in the notes of their respective visits. There were 20 (24%) tuples used across all data sources, 10 (12%) used only in inpatient notes, 15 (18%) used only in visit notes, and 7 (9%) used only in the visit transcripts. Conclusions We produced a robust set of 82 tuples for representing patients' care context data. We propose several applications of these tuples to improve EHR navigation, data entry, learning health systems, and decision support.
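A concept-relationship-concept tuple is simply a typed triple, so a small sketch may help fix ideas; the concepts, relationship names, and per-source sets below are fabricated examples, not tuples from the study:

```python
from typing import NamedTuple

class ContextTuple(NamedTuple):
    """A concept-relationship-concept annotation unit."""
    subject: str       # first clinical concept, e.g. a finding
    relationship: str  # semantic link between the two concepts
    obj: str           # second clinical concept

example = ContextTuple("hearing loss", "located_in", "left ear")

# The study compared which tuples appeared within and across data
# sources; set intersections express that comparison directly.
inpatient_notes = {"caused_by", "located_in", "improved_by"}
visit_notes = {"caused_by", "located_in", "worsened_by"}
transcripts = {"caused_by", "located_in"}
used_everywhere = inpatient_notes & visit_notes & transcripts
print(used_everywhere)  # the two relationships shared by all sources
```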
Background Clinician notes are structured in a variety of ways. This pilot study tested an innovative study design and explored the impact of note format on diagnostic accuracy and documentation review time. Objective To compare two clinical documentation formats (narrative format vs. list of findings) with respect to clinician diagnostic accuracy and documentation review time. Method Participants diagnosed written clinical cases, half presented in narrative format and half in list format. Diagnostic accuracy (defined as including the correct case diagnosis among the top three diagnoses) and time spent processing each case scenario were measured for each format. Generalised linear mixed regression models and bias-corrected bootstrap percentile confidence intervals for mean paired differences were used to analyse the primary research questions. Results The odds of a correct diagnosis were 26% greater with list-format notes than with narrative notes, but there was insufficient evidence that this difference is significant (75% CI 0.8–1.99). On average, list-format notes required 85.6 more seconds to process and arrive at a diagnosis than narrative notes (95% CI −162.3, −2.77). Among cases in which participants included the correct diagnosis, list-format notes required on average 94.17 more seconds than narrative notes (75% CI −195.9, −8.83). Conclusion This study offers note format considerations for those interested in improving clinical documentation and suggests directions for future research. Balancing clinician preference against the value of structured data may be necessary. Implications This study provides a method, and suggestive results, for further investigation of the usability of electronic documentation formats.
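The interval type named in the Method section is a bias-corrected bootstrap percentile CI for a mean paired difference. A minimal sketch of that procedure follows, using only the Python standard library; the paired-difference values are fabricated placeholders, and the study's actual analysis also used generalised linear mixed models:

```python
import random
from statistics import NormalDist, mean

def bc_bootstrap_ci(diffs, level=0.95, n_boot=10_000, seed=0):
    """Bias-corrected bootstrap percentile CI for a mean paired difference."""
    rng = random.Random(seed)
    observed = mean(diffs)
    boots = sorted(
        mean(rng.choices(diffs, k=len(diffs))) for _ in range(n_boot)
    )
    nd = NormalDist()
    # Bias correction: how far the bootstrap distribution sits from the
    # observed statistic, expressed as a standard normal quantile.
    prop_below = sum(b < observed for b in boots) / n_boot
    prop_below = min(max(prop_below, 1 / n_boot), 1 - 1 / n_boot)  # keep inv_cdf in (0, 1)
    z0 = nd.inv_cdf(prop_below)
    alpha = 1 - level
    lo_p = nd.cdf(2 * z0 + nd.inv_cdf(alpha / 2))
    hi_p = nd.cdf(2 * z0 + nd.inv_cdf(1 - alpha / 2))
    lo = boots[max(0, min(n_boot - 1, int(lo_p * n_boot)))]
    hi = boots[max(0, min(n_boot - 1, int(hi_p * n_boot)))]
    return lo, hi

# Fabricated paired differences (list-format seconds minus narrative seconds).
diffs = [120.0, -30.5, 95.2, 10.1, 60.0, -5.4, 150.3, 40.8]
print(bc_bootstrap_ci(diffs))
```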