BACKGROUND: No-shows are a major issue for healthcare centers and can be costly and disruptive: capacity is wasted and expensive resources are underutilized. Numerous studies have shown that reducing uncancelled missed appointments can have a tremendous impact, improving efficiency, reducing costs, and improving patient outcomes. Strategies involving machine learning and artificial intelligence could provide a solution. OBJECTIVE: Use artificial intelligence to build a model that predicts no-shows for individual appointments. DESIGN: Predictive modeling. SETTING: Major tertiary care center. PATIENTS AND METHODS: All historic outpatient clinic scheduling data in the electronic medical record for the one-year period from 01 January 2014 to 31 December 2014 were used to independently build predictive models with the JRip and Hoeffding tree algorithms. MAIN OUTCOME MEASURES: No-show appointments. SAMPLE SIZE: 1 087 979 outpatient clinic appointments. RESULTS: The no-show rate was 11.3% (123 299 appointments). The attributes with the highest information-gain rankings for predicting no-shows were, in descending order, history of no-shows (0.3596), appointment location (0.0323), and specialty (0.025). Age, day of the week, slot description, time of appointment, gender, and nationality had very low information-gain rankings. The JRip and Hoeffding algorithms yielded reasonable accuracy (76.44% and 77.13%, respectively); the area under the curve was 0.776 for JRip (acceptable discrimination) and 0.861 for Hoeffding trees (excellent discrimination). CONCLUSION: Appointments at high risk of no-show can be predicted in real time, enabling proactive interventions that reduce the negative impact of no-shows. LIMITATIONS: Single center; only one year of data. CONFLICT OF INTEREST: None.
Clinicians urgently need reliable and stable tools to predict the severity of COVID-19 infection in hospitalized patients in order to improve the utilization of hospital resources and supplies. Published COVID-19 guidelines are frequently updated, which limits their usefulness as a stable go-to resource for informing clinical and operational decision-making. In addition, many COVID-19 patient-level severity prediction tools developed during the early stages of the pandemic failed to perform well in the hospital setting owing to challenges including data availability, model generalization, and clinical validation. This study describes the experience of a large tertiary hospital system network in the Middle East in developing a real-time severity prediction tool that can assist clinicians in matching patients with appropriate levels of care for better management of limited health care resources during COVID-19 surges. It also provides a new perspective on predicting patients’ COVID-19 severity levels at the time of hospital admission using comprehensive data collected during the first year of the pandemic in the hospital. Unlike many previous studies of similar populations in the region, this study evaluated 4 machine learning models using a large training data set of 1386 patients collected between March 2020 and April 2021. The study uses comprehensive COVID-19 patient-level clinical data from the hospital electronic medical records (EMR), vital sign monitoring devices, and polymerase chain reaction (PCR) machines. The data were collected, prepared, and leveraged by a panel of clinical and data experts to develop a multi-class data-driven framework for predicting severity levels of COVID-19 infections at admission time. Finally, this study provides results from a prospective validation test conducted by clinical experts in the hospital.
The proposed prediction framework shows excellent performance in concurrent validation (n=462 patients, March 2020–April 2021), with the highest discrimination obtained with the random forest classification model, which achieved macro- and micro-average areas under the receiver operating characteristic curve (AUC) of 0.83 and 0.87, respectively. The prospective validation conducted by clinical experts (n=185 patients, April–May 2021) showed promising overall prediction performance, with a recall of 78.4–90.0% and a precision of 75.0–97.8% across the severity classes.
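The per-class recall and precision ranges reported for the prospective validation follow directly from the multi-class confusion counts. A minimal sketch of that computation, with hypothetical severity labels standing in for the study's data (class names and values below are illustrative only):

```python
from collections import Counter

def per_class_precision_recall(y_true, y_pred):
    """Per-class precision and recall for a multi-class prediction task."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correctly predicted as class t
        else:
            fp[p] += 1          # predicted p, but truth was something else
            fn[t] += 1          # truth t, but predicted something else
    return {
        c: {
            "precision": tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0,
            "recall": tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0,
        }
        for c in classes
    }

# Hypothetical severity labels for a handful of patients.
y_true = ["mild", "mild", "moderate", "severe", "severe", "moderate"]
y_pred = ["mild", "moderate", "moderate", "severe", "moderate", "moderate"]
metrics = per_class_precision_recall(y_true, y_pred)
```

In a real evaluation the same function would be applied to the 185 prospectively validated patients, yielding one precision/recall pair per severity class.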
Background/purpose: The electronic clinical decision support system (CDSS) is mainly used to assist health care providers in their decision-making process. The CDSS includes a dose range checking (DRC) tool. This study aims to evaluate the clinical validity of the DRC tool and compare it with the institutional Formulary and Drug Therapy Guide powered by Lexi-Comp. Methods: This retrospective study analyzed DRC alerts in the inpatient setting. Alerts were assessed for their clinical validity against the recommendations of the institution’s formulary. Relevant data on patient demographics and characteristics were collected. A sample size of 3000 DRC alerts was needed to give a margin of error of 1% (using the normal approximation to the binomial distribution, 30.26/3000 = 1%). Results: In our cohort, 1659 (55%) of the DRC alerts were generated for adult patients. A total of 1557 (52%) of all medication-related DRC alerts recommended renal dose adjustments, while 708 (24%) required hepatic dose adjustments. The majority of alerts, 2844 (95%), were clinically invalid. A total of 2892 (96%) alerts were overridden by prescribers. In 997 (33%) cases there was an overdose relative to the recommended dose, and in 1572 (52%) there was underdosing. Residents were more likely to accept the DRC alerts than other health provider categories (P < .001). Conclusion: Using DRC as a clinical decision support tool with minimal integration yielded serious clinically invalid recommendations. This could increase medication-prescribing errors and lead to alert fatigue in electronic health care systems.
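The standard margin of error for an estimated proportion under the normal approximation is z·sqrt(p(1−p)/n). A minimal sketch, with the conventional 95% z-value and a worst-case p = 0.5 as assumptions (these defaults are textbook choices, not parameters taken from the study, and the sketch does not reproduce the study's specific 30.26/3000 calculation):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the normal-approximation confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

# With n = 3000 and worst-case p = 0.5, the 95% margin of error is about 1.8%;
# a smaller observed proportion tightens it, e.g. p = 0.05 gives about 0.8%.
print(round(margin_of_error(3000), 4))        # ~0.0179
print(round(margin_of_error(3000, p=0.05), 4))  # ~0.0078
```

The margin shrinks with the square root of n, so quadrupling the sample roughly halves it.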
Purpose To describe the usefulness of an innovative “semi–real-time” pharmacy dashboard in managing workload during the unpredictable coronavirus disease 2019 (COVID-19) pandemic. Summary We created a pharmacy dashboard to monitor workload and key performance indicators during the dynamic COVID-19 crisis. The dashboard retrieved the prescribing workload from our clinical information system and filled-prescription counts from robotic prescription dispensing systems. The aggregated data were visualized using modern tools. The dashboard presents performance data in near real time and is updated every 15 minutes. After validation during the early weeks of the COVID-19 crisis, the dashboard provided reliable data and served as a valuable decision support aid in calculating the backlog of prescribed but unfilled prescriptions. It also aided in adjusting manpower, identifying prescribing and dispensing patterns and trends, and diverting staff resources to appropriate locations. The dashboard has been useful in clearing the backlog in a timely manner, planning staffing, and predicting the next surge so that accumulation of backlogged prescriptions can be proactively minimized. Conclusion Developing a dynamic, semi–real-time pharmacy dashboard under unstable circumstances such as those arising during the COVID-19 pandemic can be very useful for ambulatory care pharmacy workload management.
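The backlog figure described above is essentially prescriptions received minus prescriptions filled, accumulated across refresh intervals. A minimal sketch with hypothetical 15-minute counts (the function name and all numbers are illustrative assumptions, not taken from the dashboard):

```python
def running_backlog(prescribed, filled, start=0):
    """Cumulative backlog of prescribed-but-unfilled prescriptions per interval.

    `prescribed` and `filled` are counts for consecutive 15-minute
    refresh intervals; the backlog is floored at zero since the pharmacy
    cannot fill prescriptions that have not yet been written.
    """
    backlog, series = start, []
    for rx_in, rx_out in zip(prescribed, filled):
        backlog = max(0, backlog + rx_in - rx_out)
        series.append(backlog)
    return series

# Hypothetical counts for four 15-minute windows.
print(running_backlog(prescribed=[120, 150, 90, 60], filled=[100, 110, 130, 80]))
# → [20, 60, 20, 0]
```

A rising series signals a surge that outpaces dispensing capacity, which is the cue the abstract describes for diverting staff resources.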