Clinical prediction models (CPMs) can inform decision making about treatment initiation, which requires predicted risks assuming no treatment is given. However, this is challenging since CPMs are usually derived using data sets in which patients received treatment, often initiated post-baseline as "treatment drop-ins." This study proposes the use of marginal structural models (MSMs) to adjust for treatment drop-in. We illustrate the use of MSMs in the CPM framework through simulation studies representing randomized controlled trials and real-world observational data, and through the example of statin initiation for cardiovascular disease prevention. The simulations include a binary treatment and a covariate, each recorded at two timepoints and having a prognostic effect on a binary outcome. The bias in predicted risk was examined in a model ignoring treatment, a model fitted on treatment-naïve patients (at baseline), a model including baseline treatment, and the MSM. In all simulation scenarios, all models except the MSM underestimated the risk of the outcome in the absence of treatment. These results were supported in the statin initiation example, which showed that ignoring statin initiation post-baseline resulted in models that significantly underestimated the risk of a cardiovascular disease event occurring within 10 years. Consequently, CPMs that do not acknowledge treatment drop-in can lead to underallocation of treatment. In conclusion, when developing CPMs to predict treatment-naïve risk, researchers should consider using MSMs to adjust for treatment drop-in, and also exploit the ability of MSMs to estimate individual treatment effects.
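As a concrete illustration of the approach summarized above, the sketch below simulates treatment drop-in with a binary covariate and a binary treatment at two timepoints and fits an MSM by inverse probability of treatment weighting. All data-generating coefficients, the use of scikit-learn, and the specific stabilised-weight construction are illustrative assumptions, not the paper's actual simulation design or estimation code.

```python
# A minimal sketch (not the paper's simulation code) of MSM-based adjustment
# for treatment drop-in: binary covariate and treatment at two timepoints,
# binary outcome, inverse probability of treatment weighting, and a weighted
# outcome model from which treatment-naive risk is predicted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000


def expit(z):
    return 1.0 / (1.0 + np.exp(-z))


# Simulate: covariate x and treatment a at baseline (0) and follow-up (1).
# All coefficients below are illustrative assumptions.
x0 = rng.binomial(1, 0.4, n)
a0 = rng.binomial(1, expit(-2.0 + 1.5 * x0))                  # baseline treatment
x1 = rng.binomial(1, expit(-1.0 + 1.0 * x0))                  # time-varying covariate
a1 = np.maximum(a0, rng.binomial(1, expit(-2.0 + 1.5 * x1)))  # treatment drop-in
y = rng.binomial(1, expit(-1.5 + x0 + x1 - 0.7 * a0 - 0.7 * a1))

# Propensity of baseline treatment given the baseline covariate.
ps0 = LogisticRegression().fit(x0[:, None], a0).predict_proba(x0[:, None])[:, 1]

# Propensity of drop-in among patients untreated at baseline (treatment is
# absorbing here, so already-treated patients contribute weight 1 at time 1).
untreated = a0 == 0
ps1_model = LogisticRegression().fit(
    np.column_stack([x0[untreated], x1[untreated]]), a1[untreated]
)
ps1 = ps1_model.predict_proba(np.column_stack([x0, x1]))[:, 1]

# Stabilised inverse probability of treatment weights.
num0 = np.where(a0 == 1, a0.mean(), 1 - a0.mean())
den0 = np.where(a0 == 1, ps0, 1 - ps0)
p1 = a1[untreated].mean()
num1 = np.where(a1 == 1, p1, 1 - p1)
den1 = np.where(a1 == 1, ps1, 1 - ps1)
w = (num0 / den0) * np.where(untreated, num1 / den1, 1.0)

# Weighted outcome model (the MSM); setting a0 = a1 = 0 gives treatment-naive risk.
X_msm = np.column_stack([x0, a0, a1])
msm = LogisticRegression().fit(X_msm, y, sample_weight=w)
risk_naive = msm.predict_proba(np.column_stack([x0, np.zeros(n), np.zeros(n)]))[:, 1]
print("Mean predicted treatment-naive risk:", round(risk_naive.mean(), 3))
```

The weights create a pseudo-population in which treatment at each timepoint is independent of the measured covariate history, which is why the weighted outcome model can be read causally and evaluated with treatment set to zero, whereas an unweighted model that simply ignores drop-in absorbs the treatment benefit into its baseline risk and so underestimates treatment-naïve risk.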
The objective of this study was to assess the reliability of individual risk predictions based on routinely collected data, considering the heterogeneity between clinical sites in data and populations. Cardiovascular disease (CVD) risk prediction with QRISK3 was used as the exemplar. The study included 3.6 million patients in 392 sites from the Clinical Practice Research Datalink. Cox models with QRISK3 predictors and a frailty (random effect) term for each site were used to incorporate unmeasured site variability. There was considerable variation in data recording between general practices (missingness of body mass index ranged from 18.7% to 60.1%). Incidence rates also varied considerably between practices (from 0.4 to 1.3 CVD events per 100 patient-years). Individual CVD risk predictions with the random effects model were inconsistent with the QRISK3 predictions: for patients with a QRISK3 predicted risk of 10%, the 95% range of predicted risks with the random effects model was 7.2% to 13.7%. Random variability explained only a small part of this. The random effects model was equivalent to QRISK3 for discrimination and calibration. Risk prediction models based on routinely collected health data perform well for populations but carry considerable uncertainty for individuals. Clinicians and patients need to understand this uncertainty.
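To make the reported spread concrete, the sketch below shows how a between-practice random effect (frailty) on the log-hazard scale turns a single point prediction into a range of practice-specific risks under a proportional-hazards model. The frailty standard deviation of 0.17 is an illustrative assumption chosen to produce a spread of roughly 7% to 14% around a 10% risk; it is not the variance estimated in the study, and the function name is hypothetical.

```python
# A minimal sketch (assumed, not the study's code) of how a between-practice
# frailty on the log-hazard scale spreads one point prediction into a range
# of practice-specific risks under a proportional-hazards model.
import numpy as np
from scipy.stats import norm


def practice_risk_range(point_risk: float, frailty_sd: float, coverage: float = 0.95):
    """Central `coverage` range of practice-specific risks when a normal
    random effect with SD `frailty_sd` is added to the linear predictor."""
    z = norm.ppf(0.5 + coverage / 2.0)      # e.g. 1.96 for a 95% range
    base_surv = 1.0 - point_risk            # survival implied by the point risk
    # S_practice(t) = S(t) ** exp(b), so risk_practice = 1 - S(t) ** exp(b)
    lo = 1.0 - base_surv ** np.exp(-z * frailty_sd)
    hi = 1.0 - base_surv ** np.exp(+z * frailty_sd)
    return lo, hi


# For a 10% predicted 10-year CVD risk and an assumed (illustrative)
# between-practice SD of 0.17 on the log-hazard scale:
lo, hi = practice_risk_range(0.10, 0.17)
print(f"95% range of practice-specific risks: {lo:.1%} to {hi:.1%}")
```

The point of the calculation is that even a modest between-practice variance on the hazard scale translates into a clinically meaningful spread of individual-level risks around the same population-level prediction.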