Background: An increasing number of clinical trials are conducted in primary care settings. Making better use of existing data in the electronic health record to identify eligible subjects can improve the efficiency of such studies. Our study aims to quantify the proportion of eligibility criteria that can be addressed with data in electronic health records and to compare the content of eligibility criteria in primary care with previous work. Methods: Eligibility criteria were extracted from primary care studies downloaded from the UK Clinical Research Network Study Portfolio. Criteria were broken into elemental statements. Two expert independent raters classified each statement based on whether or not structured data items in the electronic health record can be used to determine if the statement was true for a specific patient. Disagreements in classification were discussed until 100% agreement was reached. Statements were also classified based on content, and the percentages of each category were compared to two similar studies reported in the literature. Results: Eligibility criteria were retrieved from 228 studies and decomposed into 2619 elemental statements. 74% of the criteria elemental statements were considered likely associated with structured data in an electronic health record. 79% of the studies had at least 60% of their criteria statements addressable with structured data likely to be present in an electronic health record. Based on clinical content, the most frequent categories were “disease, symptom, and sign”, “therapy or surgery”, and “medication” (36%, 13%, and 10% of total criteria statements, respectively). We also identified new criteria categories related to provider and caregiver attributes (2.6% and 1% of total criteria statements, respectively). Conclusions: Electronic health records readily contain much of the data needed to assess patients’ eligibility for clinical trial enrollment. Eligibility criteria content categories identified by our study can be incorporated as data elements in electronic health records to facilitate their integration with clinical trial management systems.
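The coverage arithmetic reported in these Results can be illustrated with a minimal sketch, assuming each elemental criteria statement has already been labeled as addressable with structured EHR data or not; the function and data layout below are hypothetical, not the authors' code.

```python
# Minimal sketch (not the study's code) of the coverage arithmetic described above:
# each elemental criteria statement is flagged as addressable with structured EHR
# data or not, then per-study coverage and the >=60% threshold are computed.
from collections import defaultdict

def coverage_summary(statements):
    """statements: iterable of (study_id, is_structured) tuples."""
    per_study = defaultdict(lambda: [0, 0])  # study_id -> [structured count, total count]
    for study_id, is_structured in statements:
        per_study[study_id][1] += 1
        if is_structured:
            per_study[study_id][0] += 1

    total = sum(t for _, t in per_study.values())
    structured = sum(s for s, _ in per_study.values())
    studies_over_60 = sum(1 for s, t in per_study.values() if s / t >= 0.60)

    return {
        "pct_statements_structured": 100 * structured / total,
        "pct_studies_with_60pct_coverage": 100 * studies_over_60 / len(per_study),
    }

# Toy example (the study itself reports 74% and 79% over 228 studies / 2619 statements).
print(coverage_summary([("s1", True), ("s1", False), ("s2", True), ("s2", True)]))
```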
Implementation of an insulin calculator tool streamlined ordering and administration of insulin in a pediatric academic institution while maintaining patient safety. Users indicated high overall satisfaction with the tool.
Objective: To adjust for dynamic insulin requirements in critically ill patients, intravenous (IV) insulin infusions allow for titration of the dose according to a prespecified algorithm. Despite the adaptability of IV insulin protocols, human involvement in dose calculation introduces the potential for error. We integrated a previously validated IV insulin calculator into our electronic health record (Epic) and instituted it in the cardiovascular intensive care unit (CVICU). We aim to describe the calculator’s design and implementation process and to evaluate its impact. Methods: Employing an aggressive training program and user acceptance testing, followed by significant at-the-elbow support at the time of implementation, we successfully integrated the insulin calculator in our CVICU. We evaluated glucometrics before and after implementation as well as nursing satisfaction following calculator implementation. Results: Overall, our implementation led to an increased proportion of blood sugars within various glycemic targets and a trend toward less hypoglycemia and hyperglycemia. For severe hypoglycemia, 0.02% of blood sugars in our preintervention cohort were less than 40 mg/dL, whereas no blood sugars less than 40 mg/dL were identified in our patients postintervention. For the CVICU target blood glucose of 70-180 mg/dL, 87.97% of blood sugars at baseline met goal compared to 91.39% at one month, 91.24% at three months, and 90.87% at six months postintervention. Conclusion: By utilizing an aggressive education campaign championing superusers and adjusting the calculator based on problems encountered early, we were able to improve glycemic control and limit glucose variability at our institution.
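The glucometrics quoted in these Results (share of values in the 70-180 mg/dL target range, share below the 40 mg/dL severe-hypoglycemia threshold) amount to simple proportions over measured blood glucose values; the sketch below is an assumed illustration, not the study's Epic implementation.

```python
# Minimal sketch (assumed, not the study's implementation) of the glucometrics above:
# the share of blood glucose values within the CVICU target range of 70-180 mg/dL
# and the share below the severe-hypoglycemia threshold of 40 mg/dL.
def glucometrics(glucose_values_mg_dl, target=(70, 180), severe_hypo=40):
    n = len(glucose_values_mg_dl)
    in_target = sum(1 for g in glucose_values_mg_dl if target[0] <= g <= target[1])
    severe = sum(1 for g in glucose_values_mg_dl if g < severe_hypo)
    return {
        "pct_in_target": 100 * in_target / n,
        "pct_severe_hypoglycemia": 100 * severe / n,
    }

# Toy example (baseline in the study was 87.97% in target).
print(glucometrics([65, 110, 150, 190, 38, 120, 175]))
```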
Multiple groups have described strategies for clinical implementation of pharmacogenetics (PGx) that often include internal laboratory tests developed specifically for their implementation needs. However, many institutions are not able to follow this practice and instead must use external laboratories to obtain PGx testing results. Because each external laboratory might have different ordering and reporting workflows, consistent reporting and storing of PGx results within the medical record can be a challenge. This can raise patient safety concerns, as important PGx information might not be easily identifiable at the point of current or future prescribing. Herein, we describe initial PGx clinical implementation efforts at a large academic medical center, focusing on optimizing three different test ordering workflows and two distinct result reporting strategies. From this, we identified common issues such as variable reporting location and structure of PGx results, as well as duplicate PGx testing. We identified several opportunities to optimize our current processes, including (1) PGx laboratory stewardship, (2) increasing visibility of PGx tests, and (3) clinician and patient education. Key to this success was engaging clinician, informatics, and pathology stakeholders as we developed interventions to improve our PGx implementation processes.
Background: Many patients with atrial fibrillation (AF) remain undiagnosed despite the availability of interventions to reduce stroke risk. Predictive models to date are limited by data requirements and theoretical usage. We aimed to develop a model for predicting the 2-year probability of AF diagnosis and implement it as a proof-of-concept (POC) in a production electronic health record (EHR). Methods: We used a nested case–control design with data from the Indiana Network for Patient Care. The development cohort came from 2016 to 2017 (outcome period) and 2014 to 2015 (baseline). A separate validation cohort used outcome and baseline periods shifted 2 years earlier than the respective development cohort periods. Machine learning approaches were used to build the predictive model. Patients ≥ 18 years, later restricted to age ≥ 40 years, with at least two encounters and no AF during baseline were included. In the 6-week EHR prospective pilot, the model was silently implemented in the production system at a large safety-net urban hospital. Three new and two previous logistic regression models were evaluated using receiver-operating characteristic analysis. The number, characteristics, and CHA2DS2-VASc scores of patients identified by the model in the pilot are presented. Results: After restricting age to ≥ 40 years, 31,474 AF cases (mean age, 71.5 years; female 49%) and 22,078 controls (mean age, 59.5 years; female 61%) comprised the development cohort. A 10-variable model using age, acute heart disease, albumin, body mass index, chronic obstructive pulmonary disease, gender, heart failure, insurance, kidney disease, and shock yielded the best performance (C-statistic, 0.80 [95% CI 0.79–0.80]). The model performed well in the validation cohort (C-statistic, 0.81 [95% CI 0.80–0.81]). In the EHR pilot, 7916/22,272 patients (35.5%; mean age, 66 years; female 50%) were identified as higher risk for AF; 5582 (70%) had a CHA2DS2-VASc score ≥ 2. Conclusions: Using variables commonly available in the EHR, we created a predictive model to identify 2-year risk of developing AF in those previously without diagnosed AF. Successful POC implementation of the model in an EHR provided a practical strategy to identify patients who may benefit from interventions to reduce their stroke risk.
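As a rough illustration of the modeling step described above, the sketch below fits a 10-variable logistic regression and reports the C-statistic (area under the ROC curve). It assumes scikit-learn and hypothetical, numerically encoded column names; it is not the authors' pipeline.

```python
# Minimal sketch (assumptions: scikit-learn, hypothetical column names, categorical
# predictors such as gender and insurance already numerically encoded) of a
# 10-variable logistic regression for 2-year AF risk evaluated with the C-statistic.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

FEATURES = [
    "age", "acute_heart_disease", "albumin", "body_mass_index", "copd",
    "gender", "heart_failure", "insurance", "kidney_disease", "shock",
]  # the 10 predictors named in the abstract (column names are assumed)

def fit_and_evaluate(train: pd.DataFrame, validate: pd.DataFrame):
    """Fit on the development cohort and report the C-statistic on the validation cohort."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train[FEATURES], train["af_within_2_years"])
    risk = model.predict_proba(validate[FEATURES])[:, 1]
    c_statistic = roc_auc_score(validate["af_within_2_years"], risk)
    return model, c_statistic
```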