Objective
To assess problem list completeness across a range of sites using an objective measure, and to identify factors associated with more complete problem lists.
Methods
We conducted a retrospective analysis of electronic health record data and interviews at ten healthcare organizations in the United States, United Kingdom, and Argentina that use a variety of electronic health record systems: four self-developed and six commercial. At each site, we measured the proportion of patients with an elevated hemoglobin A1c (≥ 7.0%), which is diagnostic of diabetes, who had diabetes recorded on their problem list. We then interviewed informatics leaders at the four highest-performing sites to identify factors associated with success. Finally, we surveyed all ten sites about practices common to the top-performing sites to determine whether there was an association between problem list management practices and problem list completeness.
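The completeness measure described above can be sketched as a simple proportion. This is a minimal illustration, not the study's actual analysis code; the field names and the sample cohort are hypothetical:

```python
# Hypothetical sketch of the completeness metric: among patients with
# HbA1c >= 7.0%, what fraction have diabetes on their problem list?
def problem_list_completeness(patients):
    """patients: list of dicts with 'max_a1c' (float) and
    'diabetes_on_problem_list' (bool). Returns a percentage, or None
    if no patient meets the HbA1c threshold."""
    eligible = [p for p in patients if p["max_a1c"] >= 7.0]
    if not eligible:
        return None
    documented = sum(1 for p in eligible if p["diabetes_on_problem_list"])
    return 100.0 * documented / len(eligible)

# Hypothetical cohort: 3 patients meet the threshold, 2 are documented.
cohort = [
    {"max_a1c": 8.1, "diabetes_on_problem_list": True},
    {"max_a1c": 7.4, "diabetes_on_problem_list": False},
    {"max_a1c": 6.2, "diabetes_on_problem_list": False},  # below threshold, excluded
    {"max_a1c": 9.0, "diabetes_on_problem_list": True},
]
print(f"{problem_list_completeness(cohort):.1f}%")  # 66.7%
```

Patients below the 7.0% threshold are excluded from the denominator entirely, mirroring the study's restriction to patients whose laboratory result is diagnostic of diabetes.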
Results
Problem list completeness across the ten sites ranged from 60.2% to 99.4%, with a mean of 78.2%. Financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture were identified as success factors at the four hospitals with problem list completeness at or near 90.0%.
Discussion
Incomplete problem lists represent a global data integrity problem that could compromise quality of care and put patients at risk. There was a wide range of problem list completeness across the healthcare facilities. Nevertheless, some facilities have achieved high levels of problem list completeness, and it is important to better understand the factors that contribute to success to improve patient safety.
Conclusion
Problem list completeness varies substantially across healthcare facilities. In our review of EHR systems at ten healthcare facilities, we identified six success factors that may be useful for healthcare organizations seeking to improve the quality of their problem list documentation: financial incentives, problem-oriented charting, gap reporting, shared responsibility, links to billing codes, and organizational culture.
Objective
The United States Office of the National Coordinator for Health Information Technology sponsored the development of a “high-priority” list of drug-drug interactions (DDIs) to be used for clinical decision support. We assessed current adoption of this list and current alerting practice for these DDIs with regard to alert implementation (presence or absence of an alert) and display (alert appearance as interruptive or passive).
Materials and Methods
We conducted evaluations of electronic health records (EHRs) at a convenience sample of healthcare organizations across the United States using a standardized testing protocol with simulated orders.
Results
Evaluations of 19 systems were conducted at 13 sites using 14 different EHRs. Across systems, 69% of the high-priority DDI pairs produced alerts. Implementation and display of the DDI alerts tested varied between systems, even when the same EHR vendor was used. Across systems, implementation of the evaluated DDI pairs ranged from 27% (4/15) to 93% (14/15).
Discussion
Currently, there is no standard of care covering which DDI alerts to implement or how to display them to providers. Opportunities to improve DDI alerting include using differential displays based on DDI severity, establishing improved lists of clinically significant DDIs, and thoroughly reviewing organizational implementation decisions regarding DDIs.
Conclusion
DDI alerting is clinically important but not standardized. There is significant room for improvement and standardization around evidence-based DDIs.
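The per-system implementation rates reported above (e.g., 4/15 to 14/15 of tested pairs) reduce to a simple fraction over simulated test results. The sketch below is hypothetical: the drug pairs and alert outcomes are illustrative placeholders, not data from the study:

```python
# Hypothetical sketch: each system is tested against a fixed list of
# high-priority DDI pairs; True means the simulated order produced an alert.
test_results = {
    "system_a": [True] * 14 + [False] * 1,   # alerts on 14 of 15 pairs
    "system_b": [True] * 4 + [False] * 11,   # alerts on 4 of 15 pairs
}

# Implementation rate per system: fraction of tested pairs that alerted.
rates = {name: sum(fired) / len(fired) for name, fired in test_results.items()}

for name, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{name}: {rate:.0%}")  # system_b: 27%, system_a: 93%
```

Comparing these rates across systems running the same vendor's EHR is what exposes the site-level configuration differences the study highlights.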
These 47 best practices represent an ideal state. The research weighs the importance of each practice against the difficulty of implementing it, highlights the challenges faced by organizations seeking to implement CDS, and describes several opportunities for future research to reduce alert malfunctions.
A review of 10 CPOE systems revealed that medication names were displayed inconsistently, which can result in confusion or errors in reviewing, selecting, and ordering medications.