Importance Nonrandomized studies using insurance claims databases can be analyzed to produce real-world evidence on the effectiveness of medical products. Given the lack of baseline randomization and measurement issues, concerns exist about whether such studies produce unbiased treatment effect estimates. Objective To emulate the design of 30 completed and 2 ongoing randomized clinical trials (RCTs) of medications with database studies using observational analogues of the RCT design parameters (population, intervention, comparator, outcome, time [PICOT]) and to quantify agreement in RCT-database study pairs. Design, Setting, and Participants New-user cohort studies with propensity score matching using 3 US claims databases (Optum Clinformatics, MarketScan, and Medicare). Inclusion-exclusion criteria for each database study were prespecified to emulate the corresponding RCT. RCTs were explicitly selected based on feasibility, including power, key confounders, and end points more likely to be emulated with real-world data. All 32 protocols were registered on ClinicalTrials.gov before conducting analyses. Emulations were conducted from 2017 through 2022. Exposures Therapies for multiple clinical conditions were included. Main Outcomes and Measures Database study emulations focused on the primary outcome of the corresponding RCT. Findings of database studies were compared with RCTs using predefined metrics, including Pearson correlation coefficients and binary metrics based on statistical significance agreement, estimate agreement, and standardized difference. Results In these highly selected RCTs, the overall observed agreement between the RCT and the database emulation results was a Pearson correlation of 0.82 (95% CI, 0.64-0.91), with 75% meeting statistical significance, 66% estimate agreement, and 75% standardized difference agreement. In a post hoc analysis limited to 16 RCTs with closer emulation of trial design and measurements, concordance was higher (Pearson r, 0.93; 95% CI, 0.79-0.97; 94% meeting statistical significance, 88% estimate agreement, 88% standardized difference agreement). Weaker concordance occurred among 16 RCTs for which close emulation of certain design elements that define the research question (PICOT) with data from insurance claims was not possible (Pearson r, 0.53; 95% CI, 0.00-0.83; 56% meeting statistical significance, 50% estimate agreement, 69% standardized difference agreement). Conclusions and Relevance Real-world evidence studies can reach similar conclusions to RCTs when design and measurements can be closely emulated, but this may be difficult to achieve. Concordance in results varied depending on the agreement metric. Emulation differences, chance, and residual confounding can contribute to divergence in results and are difficult to disentangle.
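The agreement metrics named in this abstract can be illustrated with a short sketch. The exact definitions used in the study are not given here, so the rules below are assumptions: estimate agreement means the database estimate falls within the RCT's 95% CI, statistical significance agreement means both results reach (or both miss) significance in the same direction, and standardized-difference agreement means the standardized difference between the log estimates is below 1.96. All numbers in the example are hypothetical.

```python
import numpy as np

def log_se_from_ci(lo, hi):
    """Approximate the standard error of a log hazard ratio from its 95% CI."""
    return (np.log(hi) - np.log(lo)) / (2 * 1.96)

def agreement_metrics(rct, rwe):
    """rct, rwe: dicts with 'hr' (point estimate), 'lo' and 'hi' (95% CI bounds)."""
    log_rct, log_rwe = np.log(rct["hr"]), np.log(rwe["hr"])
    se_rct = log_se_from_ci(rct["lo"], rct["hi"])
    se_rwe = log_se_from_ci(rwe["lo"], rwe["hi"])

    # Estimate agreement: emulation point estimate lies inside the RCT 95% CI (assumed rule).
    estimate_agreement = rct["lo"] <= rwe["hr"] <= rct["hi"]

    # Statistical significance agreement: both CIs exclude the null in the same
    # direction, or neither excludes it (assumed rule).
    excludes_null = lambda d: d["lo"] > 1 or d["hi"] < 1
    same_direction = (log_rct > 0) == (log_rwe > 0)
    significance_agreement = (
        (excludes_null(rct) and excludes_null(rwe) and same_direction)
        or (not excludes_null(rct) and not excludes_null(rwe))
    )

    # Standardized difference of the two log estimates (assumed 1.96 threshold).
    std_diff = abs(log_rct - log_rwe) / np.sqrt(se_rct**2 + se_rwe**2)
    std_diff_agreement = std_diff < 1.96

    return estimate_agreement, significance_agreement, std_diff_agreement

# Hypothetical RCT-emulation pair
print(agreement_metrics({"hr": 0.80, "lo": 0.68, "hi": 0.94},
                        {"hr": 0.85, "lo": 0.74, "hi": 0.98}))
```

Across the full set of trial pairs, the Pearson correlation reported above would then be computed on the paired vectors of log estimates (e.g., np.corrcoef(log_rct, log_rwe)[0, 1]).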
Studies that generate real-world evidence on the effects of medical products through analysis of digital data collected in clinical practice provide key insights for regulators, payers, and other healthcare decision-makers. Ensuring reproducibility of such findings is fundamental to effective evidence-based decision-making. We reproduce results for 150 studies published in peer-reviewed journals using the same healthcare databases as the original investigators and evaluate the completeness of reporting for 250. Original and reproduction effect sizes were positively correlated (Pearson's correlation = 0.85), a strong relationship with some room for improvement. The median relative magnitude of effect (e.g., the hazard ratio in the original study divided by the hazard ratio in the reproduction) is 1.0 (interquartile range, 0.9-1.1; range, 0.3-2.1). While the majority of results are closely reproduced, a subset are not. The latter can be explained by incomplete reporting and updated data. Greater methodological transparency aligned with new guidance may further improve reproducibility and validity assessment, thus facilitating evidence-based decision-making. Study registration number: EUPAS19636.
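As a rough illustration of the two summary measures described above, the sketch below computes the relative magnitude of effect and the correlation of original versus reproduction effect sizes for a handful of hypothetical hazard ratios; the published per-study values are not reproduced here, and computing the correlation on the log scale is an assumption.

```python
import numpy as np

# Hypothetical original and reproduction hazard ratios for five studies
hr_original     = np.array([1.20, 0.85, 2.40, 0.60, 1.05])
hr_reproduction = np.array([1.15, 0.90, 2.10, 0.65, 1.10])

# Relative magnitude of effect: original estimate divided by reproduction estimate
relative_magnitude = hr_original / hr_reproduction
print("median:", np.median(relative_magnitude))
print("IQR:", np.percentile(relative_magnitude, [25, 75]))

# Correlation of effect sizes, computed here on the log scale (ratio measures)
pearson_r = np.corrcoef(np.log(hr_original), np.log(hr_reproduction))[0, 1]
print("Pearson correlation:", round(pearson_r, 2))
```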
Objectives A peer-support program called Resilience In Stressful Events (RISE) was designed to help hospital staff cope with stressful patient-related events. The aim of this study was to evaluate the impact of the RISE program by conducting an economic evaluation of its cost benefit. Methods A Markov model with a 1-year time horizon was developed to compare the cost benefit with and without the RISE program from a provider (hospital) perspective. Nursing staff who used the RISE program between 2015 and 2016 at a 1000-bed private hospital in the United States were included in the analysis. The cost of running the RISE program, nurse turnover, and nurse time off were modeled. Cost data were obtained from a literature review and hospital records. Probabilities of quitting or taking time off with or without the RISE program were estimated from survey data. Net monetary benefit (NMB) and the budget impact of having the RISE program were computed to determine the cost benefit to the hospital. Results The model estimated an expected net monetary benefit of US $22,576.05 per nurse who initiated a RISE call. These savings were consistent in 99.9% of iterations of a probabilistic sensitivity analysis. The budget impact analysis revealed that a hospital could save US $1.81 million each year because of the RISE program. Conclusions The RISE program resulted in substantial cost savings to the hospital. These findings, together with the high demand for this type of service and the potential for cost savings, should encourage hospitals to implement institution-wide support programs for medical staff.
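For orientation, the back-of-envelope sketch below relates the per-nurse net monetary benefit reported above to the annual budget impact; the number of RISE calls per year is inferred from these two figures rather than reported in the abstract.

```python
# NMB is monetized benefit minus program cost; here we only relate the two
# headline figures from the abstract.
nmb_per_call = 22_576.05        # US$ net monetary benefit per nurse initiating a RISE call
annual_budget_impact = 1.81e6   # US$ savings per year from the budget impact analysis

# Implied number of RISE calls per year under these figures (an inference, roughly 80)
implied_calls_per_year = annual_budget_impact / nmb_per_call
print(f"Implied RISE calls per year: {implied_calls_per_year:.0f}")
```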
Background Medical and regulatory communities are increasingly interested in the utility of real-world evidence (RWE) for answering questions pertaining to drug safety and effectiveness, but concerns about validity remain. A principled approach to conducting RWE studies may alleviate those concerns and increase confidence in findings. This study sought to predict the findings of the PRONOUNCE trial using a principled approach to generating RWE. Methods This propensity score (PS)-matched observational cohort study used 3 claims databases to compare the occurrence of major adverse cardiovascular events (MACE) among initiators of degarelix vs. leuprolide. Patients were included if they had a history of prostate cancer and atherosclerotic cardiovascular disease. Subjects were excluded if they did not have continuous database enrollment in the year prior to treatment initiation, were exposed to androgen deprivation therapy or experienced an acute cardiovascular event within 30 days prior to treatment initiation, or had a history of, or risk factors for, QT prolongation. Results There were 12,448 leuprolide and 1,969 degarelix study-eligible patients before matching, with 1,887 in each arm after PS matching. The result for MACE comparing degarelix with leuprolide in the observational analysis (hazard ratio = 1.35; 95% confidence interval = 0.94–1.93) was consistent with the subsequently released PRONOUNCE result (hazard ratio = 1.28; 95% confidence interval = 0.59–2.79). Conclusions This study successfully predicted the result of a comparative cardiovascular safety trial in the oncology setting. Although the findings are encouraging, limitations in measuring cancer stage and tumor progression are representative of the challenges in attempting to generalize whether claims-based RWE can be used as actionable evidence.
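A minimal sketch of the analytic approach described above (new-user cohorts, a propensity score from baseline covariates, 1:1 nearest-neighbor matching, and a Cox model for the MACE hazard ratio) follows. All data and variable names are hypothetical, the covariates are illustrative rather than the study's confounder list, and the matching is simplified (with replacement, no caliper); this is not the study's code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "degarelix": rng.integers(0, 2, n),     # 1 = degarelix initiator, 0 = leuprolide initiator
    "age": rng.normal(72, 8, n),            # illustrative baseline covariates
    "prior_mi": rng.integers(0, 2, n),
    "statin_use": rng.integers(0, 2, n),
})
df["time"] = rng.exponential(365, n)        # days to MACE or censoring (synthetic)
df["mace"] = rng.integers(0, 2, n)          # MACE event indicator (synthetic)

# Propensity score: probability of receiving degarelix given baseline covariates
covariates = ["age", "prior_mi", "statin_use"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["degarelix"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 1:1 nearest-neighbor matching on the propensity score (with replacement, no caliper)
treated, control = df[df["degarelix"] == 1], df[df["degarelix"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# Cox proportional hazards model for the hazard ratio of MACE in the matched cohort
cph = CoxPHFitter().fit(matched[["time", "mace", "degarelix"]],
                        duration_col="time", event_col="mace")
print(cph.summary.loc["degarelix",
      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```

In a real emulation, the propensity score would be estimated from the full prespecified confounder set, matching would typically be done without replacement within a caliper, and covariate balance would be checked before outcome modeling.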