PURPOSE: The Oncology Care Model (OCM) is Medicare’s first alternative payment model program for patients with cancer. As of October 2017, participating practices were required to report biomarker testing of patients with advanced non–small-cell lung cancer (aNSCLC). Our objective was to evaluate the effect of this OCM reporting requirement on quality of care. METHODS: We selected patients with aNSCLC receiving care in practices in a nationwide de-identified electronic health record-derived database. We used an adjusted difference-in-differences (DID) logistic regression model to compare changes in biomarker testing rates (EGFR, ROS1, and ALK) and receipt of biomarker-guided therapy between patients in OCM versus non-OCM practices, before and after OCM implementation. RESULTS: The analysis included 14,048 patients from 45 OCM practices (n = 8,151) and 105 non-OCM practices (n = 5,897). The overall unadjusted rates for biomarker testing and receipt of biomarker-guided therapy increased over the study period (2011-2018) in both OCM (55.5% v 71.6%; 89.8% v 94.6%, respectively) and non-OCM (55.2% v 69.7%; 90.1% v 95.2%, respectively) practices. In the adjusted DID model, the rates of biomarker testing (odds ratio [OR], 1.09 [95% CI, 0.88 to 1.34]; P = .45) and receipt of biomarker-guided therapy (OR, 0.87 [95% CI, 0.52 to 1.45]; P = .58) were similar between OCM and non-OCM practices. CONCLUSION: OCM biomarker documentation and reporting requirements did not appear to increase the proportions of patients with aNSCLC who underwent testing or who received biomarker-guided therapy in OCM versus non-OCM practices.
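The adjusted difference-in-differences logistic regression described above can be sketched with the statsmodels formula API: the group-by-period interaction term is the DID estimate. This is a minimal illustration on simulated data; the variable names, sample size, and data-generating assumptions are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated patient-level data: OCM membership and pre/post period indicators.
df = pd.DataFrame({
    "ocm": rng.integers(0, 2, n),   # 1 = treated at an OCM practice
    "post": rng.integers(0, 2, n),  # 1 = diagnosed after the Oct 2017 requirement
})
# Testing rates rise in the post period for both groups; no true DID effect
# is simulated here, mirroring the study's null finding.
df["tested"] = rng.binomial(1, 0.55 + 0.15 * df["post"])

# The ocm:post interaction coefficient is the difference-in-differences
# estimate: the extra change in OCM practices beyond the secular trend.
model = smf.logit("tested ~ ocm + post + ocm:post", data=df).fit(disp=0)
did_or = float(np.exp(model.params["ocm:post"]))
ci_low, ci_high = np.exp(model.conf_int().loc["ocm:post"])
```

Exponentiating the interaction coefficient yields the DID odds ratio and its confidence interval on the same scale as the results reported above.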
1539 Background: Clinical trial eligibility increasingly requires information found in NGS tests; the lack of structured NGS results hinders automation of trial matching for this criterion, which may deter certain sites from opening biomarker-driven trials. We developed a machine learning tool that infers the presence of NGS results in the EHR, facilitating clinical trial matching. Methods: The Flatiron Health EHR-derived database contains patient-level pathology and genetic counseling reports from community oncology practices. An internal team of clinical experts reviewed a random sample of patients across this network to generate labels of whether each patient had been NGS tested. A supervised ML model was trained by scanning documents in the EHR and extracting n-gram features from text snippets surrounding relevant keywords (e.g., 'Lung biomarker', 'Biomarker negative'). Using k-fold cross-validation, we found that an L2-regularized logistic regression could classify patients' NGS testing status. The model's offline performance on a 20% hold-out test set was measured with standard classification metrics: sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). In an online setting, we integrated the tool into Flatiron's clinical trial matching software, OncoTrials, by including in each patient's profile an indicator of "likely NGS tested" or "unlikely NGS tested" based on the classifier's prediction. For patients inferred as tested, the model linked users to a test report view in the EHR. In this online setting, we measured sensitivity and specificity of the model after user review in two community oncology practices. Results: This NGS testing status inference model was characterized using a test sample of 15,175 patients. The model sensitivity and specificity (95% CI) were 91.3% (90.2, 92.3) and 96.2% (95.8, 96.5), respectively; PPV was 84.5% (83.2, 85.8) and NPV was 98.0% (97.7, 98.2).
In the validation sample (N = 200, originating from 2 distinct care sites), users identified NGS testing status with a sensitivity of 95.2% (88.3%, 98.7%). Conclusions: This machine learning model facilitates screening for potential enrollment in biomarker-driven trials by automatically surfacing patients with NGS test results, at high sensitivity and specificity, within a trial matching application. This tool could mitigate a key barrier to participation in biomarker-driven trials for community clinics.
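The modeling approach above (n-gram features from text snippets, fed to an L2-regularized logistic regression evaluated by k-fold cross-validation) can be sketched with scikit-learn. The snippets, labels, and hyperparameters below are toy illustrations, not Flatiron's data or model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy text snippets surrounding testing-related keywords, with expert labels
# (1 = patient was NGS tested, 0 = not). Real snippets come from EHR documents.
snippets = [
    "lung biomarker panel ordered results pending",
    "comprehensive genomic profiling performed on biopsy",
    "biomarker negative no further testing planned",
    "patient declined molecular testing at this time",
    "NGS report reviewed EGFR mutation detected",
    "no genomic testing documented in chart",
]
labels = [1, 1, 0, 0, 1, 0]

# Unigram + bigram counts with an L2-penalized logistic regression, mirroring
# the n-gram / L2-regularization setup described in the abstract.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(penalty="l2", C=1.0),
)
scores = cross_val_score(clf, snippets, labels, cv=3)  # k-fold cross-validation
clf.fit(snippets, labels)
pred = clf.predict(["NGS panel results reviewed with patient"])[0]
```

In production, the binary prediction would drive the "likely NGS tested" / "unlikely NGS tested" indicator shown to users in the trial matching application.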
6620 Background: The OCM is a voluntary Center for Medicare & Medicaid Innovation alternative payment model pilot program. As of Oct 2017, OCM practices are required to report the biomarker status of pts with NSCLC. Our objective was to assess the effect of OCM reporting on quality of care in aNSCLC. Methods: We developed a decision-analytic model to compare the likelihood of receiving biomarker testing and corresponding appropriate therapy. We populated the model using real-world data from pts (n = 7,075) at OCM sites (n = 45) and non-OCM sites (n = 105) in the Flatiron Health electronic health record (EHR)-derived database. The pre-period control included pts diagnosed with aNSCLC from Jan 1, 2011, to Dec 31, 2015; the post-period included pts diagnosed from Oct 2017 to Nov 2018. For OCM vs non-OCM sites, we estimated probabilities and unadjusted odds ratios (ORs) of biomarker testing (EGFR, ROS1, or ALK) and subsequent delivery of appropriate therapy, defined as use of a biomarker-guided tyrosine kinase inhibitor (TKI) for biomarker-positive pts or non-TKI therapy for biomarker-negative pts. Results: No differences in rates of biomarker testing or delivery of appropriate therapy were detected between OCM and non-OCM practices prior to the reporting requirement. In the post-period, OCM was associated with higher odds of biomarker testing and appropriate therapy (Table). Conclusions: To our knowledge, this is the first study of the association of OCM reporting requirements with downstream quality of care. Our results suggest that OCM documentation and reporting requirements are associated with modestly higher quality of care for pts with aNSCLC. Ongoing sensitivity analyses will determine the relative contribution of provider and practice characteristics to these findings. Careful measurement of the impact of reporting requirements is essential to evaluate payment reform interventions and inform policy. [Table: see text]
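An unadjusted odds ratio of the kind estimated above comes from a 2x2 table of tested versus untested patients at OCM versus non-OCM sites. The sketch below uses hypothetical counts (the study's actual counts are in its table) and the standard Woolf log-OR confidence interval.

```python
import math

# Hypothetical post-period 2x2 counts (illustrative only, not the study's data):
# rows = OCM vs non-OCM site, columns = biomarker tested vs not tested.
ocm_tested, ocm_untested = 700, 280
non_tested, non_untested = 620, 310

# Unadjusted odds ratio: odds of testing at OCM sites over non-OCM sites.
odds_ratio = (ocm_tested * non_untested) / (ocm_untested * non_tested)

# Woolf method: the log-OR standard error is the root of summed reciprocal counts.
se_log_or = math.sqrt(
    1 / ocm_tested + 1 / ocm_untested + 1 / non_tested + 1 / non_untested
)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

A confidence interval that excludes 1 would indicate a detectable difference in testing odds between the two groups of practices.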
2051 Background: Efforts to facilitate patient identification for clinical trials in routine practice, such as automating electronic health record (EHR) data reviews, are hindered by the lack of information on metastatic status in structured format. We developed a machine learning tool that infers metastatic status from unstructured EHR data, and we describe its real-world implementation. Methods: This machine learning model scans EHR documents, extracting features from text snippets surrounding key words (e.g., 'Metastatic', 'Progression', 'Local'). A regularized logistic regression model was trained and used to classify patients across 5 metastatic status inference categories: highly-likely and likely positive, highly-likely and likely negative, and unknown. The model's accuracy was characterized using the Flatiron Health EHR-derived de-identified database of patients with solid tumors, where manually abstracted information served as the reference standard. We assessed model accuracy using sensitivity and specificity (patients in the 'unknown' category omitted from the numerator), positive and negative predictive values (PPV, NPV; 'unknown' patients included in the denominator), and its performance in a real-world dataset. In a separate validation, we evaluated the accuracy gained upon additional user review of the model outputs after integration of this tool into workflows. Results: This metastatic status inference model was characterized using a sample of 66,532 patients. The model sensitivity and specificity (95% CI) were 82% (82, 83) and 95% (95, 96), respectively; PPV was 89% (89, 90) and NPV was 94% (94, 94). In the validation sample (N = 200, originating from 5 distinct care sites), and after user review of model outputs, values increased to 97% (85, 100) for sensitivity, 98% (95, 100) for specificity, 92% (78, 98) for PPV, and 99% (97, 100) for NPV.
The model assigned 163/200 patients to the highly-likely categories, which were deemed not to require further EHR review. The prevalence of errors was 4% without user review and 2% after user review. Conclusions: This machine learning model infers metastatic status from unstructured EHR data with high accuracy. The tool assigns metastatic status with high confidence in more than 75% of cases without requiring additional manual review, allowing more efficient identification of clinical trial candidates and clinical trial matching, and thus mitigating a key barrier to clinical trial participation in community clinics.
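The five inference categories described above can be implemented as probability cutoffs on the classifier's output, with only the middle bands routed to manual review. The thresholds below are hypothetical choices for illustration, not the study's actual cutoffs.

```python
def categorize(p: float) -> str:
    """Map a predicted probability of metastatic disease to one of the five
    inference categories. Thresholds are illustrative, not the study's."""
    if p >= 0.95:
        return "highly-likely positive"  # no manual EHR review required
    if p >= 0.70:
        return "likely positive"         # routed to user review
    if p <= 0.05:
        return "highly-likely negative"  # no manual EHR review required
    if p <= 0.30:
        return "likely negative"         # routed to user review
    return "unknown"

# Only patients outside the highly-likely bands need manual review, which is
# how the tool resolved the majority of cases without user effort.
```

Widening the highly-likely bands trades review workload against the error rate the abstract reports (4% without review, 2% after review).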