Objective
To evaluate the clinical utility of automatable prediction models for increasing goals-of-care discussions among hospitalized patients at the end of life.
Materials and Methods
We developed three random forest (RF) models and updated the Modified Hospital One-year Mortality Risk model as alternative models predicting one-year mortality (a proxy for end-of-life [EOL] status) from admission-time data. Admissions from July 2011-2016 were used for training and those from July 2017-2018 for temporal validation. We simulated alerts for admissions in the validation cohort and modelled alternative scenarios in which alerts led to code status orders (CSOs) in the electronic health record (EHR). We linked actual CSOs and calculated the expected risk difference (eRD), the number needed to benefit (NNB), and the net benefit (NB) of each model for the patient-centered outcome of a CSO among EOL hospitalizations.
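The three decision-analytic metrics above can be sketched in code. This is a minimal illustrative sketch, not the study's exact estimands: the function name, its arguments, the toy confusion-matrix counts, and the specific formulas (eRD as the alert coverage of EOL admissions times the shortfall in the usual-care CSO rate; NNB as alerts per additional benefited EOL admission; NB as the standard true-positive rate minus false positives discounted at a threshold odds) are all assumptions made for illustration.

```python
def decision_metrics(tp, fp, fn, tn, baseline_cso_rate, fp_threshold_odds):
    """Toy decision-analytic metrics for a simulated alert policy.

    tp/fp/fn/tn: confusion-matrix counts for EOL status among admissions
        (positives = alerted admissions).
    baseline_cso_rate: CSO rate among EOL admissions under usual care.
    fp_threshold_odds: relative cost of a false-positive alert (e.g. one
        could derive it from the PPV of a usual-care CSO as PPV/(1-PPV)).
    """
    n = tp + fp + fn + tn
    eol = tp + fn          # all EOL admissions
    alerts = tp + fp       # all alerts fired

    # eRD: expected absolute increase in the CSO rate among EOL admissions,
    # assuming every alerted EOL admission gains a CSO it lacked at baseline
    erd = (tp / eol) * (1 - baseline_cso_rate)

    # NNB: alerts issued per additional EOL admission that benefits
    nnb = alerts / (tp * (1 - baseline_cso_rate))

    # NB: net benefit, with false positives penalized at the threshold odds
    nb = (tp - fp * fp_threshold_odds) / n
    return erd, nnb, nb
```

With toy counts tp=100, fp=200, fn=50, tn=650, a baseline CSO rate of 0.5, and a false-positive cost of 0.5, the sketch yields eRD = 1/3, NNB = 6 alerts, and NB = 0.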
Results
Models had C-statistics of 0.79-0.86 among unique patients. A CSO was documented during 2599 of 3773 hospitalizations at the EOL (68.9%). At a threshold that identified 10% of eligible admissions, the eRD ranged from 5.4% to 10.7% (NNB, 5.4-10.9 alerts). Under usual care, a CSO had a positive predictive value (PPV) of 34% for EOL status. Using this PPV to inform the relative cost of false positives, only two models improved NB over usual care. An RF model with diagnostic predictors had the highest clinical utility by either measure, including in sensitivity analyses.
Discussion
Automatable prediction models with acceptable temporal validity differed meaningfully in their expected ability to improve patient-centered outcomes over usual care.
Conclusion
Decision analysis should precede implementation of automated prediction models intended to improve palliative and EOL care outcomes.