OBJECTIVE Use machine-learning (ML) algorithms to classify alerts as real or artifact in online noninvasive vital sign (VS) data streams, to reduce alarm fatigue and missed true instability.
METHODS Using a 24-bed trauma step-down unit's noninvasive VS monitoring data (heart rate [HR], respiratory rate [RR], and peripheral oximetry [SpO2] recorded at 1/20 Hz, plus noninvasive oscillometric blood pressure [BP] recorded less frequently), we partitioned the data into a training/validation set (294 admissions; 22,980 monitoring hours) and a test set (2,057 admissions; 156,177 monitoring hours). Alerts were VS deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts selected by active learning (576 in the training/validation set, 397 in the test set) as real or artifact, and ML algorithms were trained on these annotations. The best model was evaluated on test-set alerts to simulate online alert classification as signals evolved over time.
MAIN RESULTS The Random Forest model discriminated real alerts from artifacts as alerts evolved online in the test set: for SpO2, the area under the curve (AUC) was 0.79 (95% CI 0.67-0.93) at the instant the VS first crossed threshold and increased to 0.87 (95% CI 0.71-0.95) at 3 minutes into the alerting period. BP AUC started at 0.77 (95% CI 0.64-0.95) and increased to 0.87 (95% CI 0.71-0.98), while RR AUC started at 0.85 (95% CI 0.77-0.95) and increased to 0.97 (95% CI 0.94-1.00). HR alerts were too few for model development.
CONCLUSIONS ML models can discern clinically relevant SpO2, BP, and RR alerts from artifacts in an online monitoring dataset (AUC ≥ 0.87).
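A minimal sketch (not the authors' code) of the kind of online alert classification described above, assuming alerts are stored as VS windows sampled at 1/20 Hz and expert labels are 1 = real, 0 = artifact. The feature set and the 3-minute evaluation horizon are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def window_features(vs_window: np.ndarray) -> np.ndarray:
    """Summary features for the signal observed so far in the alerting period."""
    return np.array([
        vs_window[-1],                       # current value
        vs_window.mean(),                    # mean since threshold crossing
        vs_window.std(),                     # variability
        vs_window.max() - vs_window.min(),   # range
        np.abs(np.diff(vs_window)).max() if vs_window.size > 1 else 0.0,  # largest jump
    ])

def evaluate_online(train_alerts, y_train, test_alerts, y_test, horizon_samples):
    """Train on labeled alerts and report AUC using only data available up to `horizon_samples`."""
    X_train = np.vstack([window_features(a[:horizon_samples]) for a in train_alerts])
    X_test = np.vstack([window_features(a[:horizon_samples]) for a in test_alerts])
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    return roc_auc_score(y_test, scores)

# Example: AUC at the first sample vs. roughly 3 minutes (9 samples at 1/20 Hz) into the alert.
# auc_t0 = evaluate_online(train_alerts, y_train, test_alerts, y_test, horizon_samples=1)
# auc_3min = evaluate_online(train_alerts, y_train, test_alerts, y_test, horizon_samples=9)
```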
Approximately 10-15% of persons living with HIV (PLWH) have a comorbid diagnosis of diabetes mellitus (DM). Both chronic conditions are associated with a high symptom burden. The purpose of our study was to describe symptom patterns for PLWH with DM (PLWH+DM) using a large secondary dataset. The prevalence, burden, and bothersomeness of symptoms reported by patients during routine clinic visits in 2015 were assessed using the 20-item HIV Symptom Index. Principal component analysis was used to identify symptom clusters. Three main clusters were identified: (a) neurological/psychological, (b) gastrointestinal/flu-like, and (c) physical changes. The most prevalent symptoms were fatigue, poor sleep, aches, neuropathy, and sadness. Compared to a previous symptom study of PLWH, symptoms clustered differently in our sample of patients with dual diagnoses of HIV and diabetes. Clinicians should assess symptoms with their patients' comorbid conditions in mind.
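A minimal sketch of principal-component-based symptom clustering on 20-item HIV Symptom Index responses, assuming a patients-by-symptoms DataFrame. The choice of three components and the 0.4 loading cutoff are illustrative assumptions; any rotation step the study may have used is omitted.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def symptom_clusters(symptoms: pd.DataFrame, n_components: int = 3, loading_cutoff: float = 0.4):
    """Return, for each principal component, the symptoms loading above the cutoff."""
    scaled = StandardScaler().fit_transform(symptoms)
    pca = PCA(n_components=n_components)
    pca.fit(scaled)
    # Loadings = eigenvectors scaled by the square root of their explained variance.
    loadings = pd.DataFrame(
        pca.components_.T * np.sqrt(pca.explained_variance_),
        index=symptoms.columns,
        columns=[f"PC{i + 1}" for i in range(n_components)],
    )
    return {pc: loadings.index[loadings[pc].abs() >= loading_cutoff].tolist()
            for pc in loadings.columns}

# Each returned list is a candidate symptom cluster (e.g., neurological/psychological).
```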
PURPOSE Large hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored noninvasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby “cleaning” such data for future modeling.
METHODS 634 admissions to a step-down unit had continuous noninvasive VS monitoring data recorded (heart rate [HR], respiratory rate [RR], and peripheral arterial oxygen saturation [SpO2] at 1/20 Hz, plus noninvasive oscillometric blood pressure [BP]). Periods when VS data crossed stability thresholds defined VS event epochs. Data were divided into Block 1 (the ML training/cross-validation set) and Block 2 (the test set). Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models that automatically classify events as real or artifact. The models were then tested on Block 2.
RESULTS Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (10-fold cross-validation) gave area under the curve (AUC) scores of 0.97 for RR, 0.91 for BP, and 0.76 for SpO2. Performance on the Block 2 test data was AUC 0.94 for RR, 0.84 for BP, and 0.72 for SpO2.
CONCLUSIONS ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.
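A minimal sketch of the 10-fold cross-validation step, assuming a feature matrix X (one row per Block 1 VS event) and expert labels y (1 = real, 0 = artifact) have already been extracted. The specific candidate classifiers listed are illustrative assumptions, not necessarily those the authors compared.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CANDIDATES = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "svm_rbf": make_pipeline(StandardScaler(), SVC(random_state=0)),
}

def compare_models(X: np.ndarray, y: np.ndarray) -> dict:
    """Mean 10-fold cross-validated ROC AUC for each candidate classifier."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return {name: cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
            for name, model in CANDIDATES.items()}

# The best model per signal (RR, BP, SpO2) would then be refit on all of Block 1
# and evaluated once on the held-out Block 2 events.
```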
This study explored the use of unsupervised machine learning to identify subgroups of patients with heart failure who used telehealth services in the home health setting, and examined intercluster differences in patient characteristics related to medical history, symptoms, medications, psychosocial assessments, and healthcare utilization. Using a feature selection algorithm, we selected seven variables from 557 patients for clustering. We tested three clustering techniques: hierarchical, k-means, and partitioning around medoids. Hierarchical clustering was identified as the best technique using internal validation methods. Intercluster differences in patient characteristics and outcomes were assessed with either the χ² test or one-way analysis of variance. Ranging in size from 153 to 233 patients, the three clusters differed significantly (P < .05) in age, sex, medical history of comorbid conditions, use of beta blockers, and quality-of-life assessment. Significant (P < .001) intercluster differences in number of medications, comorbidities, and healthcare utilization were also revealed. The study identified patterns of association between (1) mental health status, pulmonary disorders, and obesity, and (2) healthcare utilization for patients with heart failure who used telehealth in the home health setting. Study results also revealed a lack of prescription of guideline-recommended heart failure medications for the subgroup with the highest proportion of older female adults.
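A minimal sketch of this kind of cluster comparison, assuming a DataFrame `features` holding the seven selected variables for the 557 patients. The use of silhouette score as the internal validation metric, k = 3, and the column names "sex" and "age" in the test step are illustrative assumptions; partitioning around medoids is omitted because it needs an extra package (e.g., scikit-learn-extra's KMedoids).

```python
import pandas as pd
from scipy import stats
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def best_clustering(features: pd.DataFrame, k: int = 3) -> pd.Series:
    """Fit candidate clustering methods and keep the labels with the best silhouette score."""
    X = StandardScaler().fit_transform(features)
    candidates = {
        "hierarchical": AgglomerativeClustering(n_clusters=k).fit_predict(X),
        "kmeans": KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
    }
    best = max(candidates, key=lambda name: silhouette_score(X, candidates[name]))
    return pd.Series(candidates[best], index=features.index, name="cluster")

def intercluster_tests(data: pd.DataFrame, clusters: pd.Series):
    """Chi-square test for a categorical column ('sex'), one-way ANOVA for a numeric one ('age')."""
    chi2_stat, chi2_p, dof, expected = stats.chi2_contingency(pd.crosstab(clusters, data["sex"]))
    groups = [data.loc[clusters == c, "age"].dropna() for c in sorted(clusters.unique())]
    anova_p = stats.f_oneway(*groups).pvalue
    return chi2_p, anova_p
```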
Feature selection techniques show promise for reducing public health nurse (PHN) documentation burden by identifying the most critical data elements needed to predict risk status. Further studies refining the feature selection process can help focus public health nurses on client-specific, targeted interventions in the delivery of care.
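A minimal sketch of one common feature selection approach for this kind of task, assuming a documentation DataFrame X (one column per data element) and a binary risk-status label y. Recursive feature elimination over a random forest is an illustrative assumption, not the study's published method.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

def select_critical_elements(X: pd.DataFrame, y: pd.Series) -> list:
    """Return the subset of data elements retained by cross-validated recursive elimination."""
    selector = RFECV(
        estimator=RandomForestClassifier(n_estimators=300, random_state=0),
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
        scoring="roc_auc",
    )
    selector.fit(X, y)
    return list(X.columns[selector.support_])
```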