Objective: To create a method for assessing physician performance and care outcomes that are adjusted for procedure acuity and patient comorbidity. Design: Between 2004 and 2008, surgical procedures performed by 10 surgeons were stratified into high-acuity procedures (HAPs) and low-acuity procedures (LAPs). Risk adjustment was made for comorbid conditions examined singly or in groups of 2 or more. Setting: A tertiary care medical center. Patients: A total of 2618 surgical patients. Main Outcome Measures: Performance measures included length of stay; return to the operating room within 7 days of surgery; and the occurrence of mortality, hospital readmission, transfusion, and wound infection within 30 days of surgery. Results: Transfusion rates were 2.7% for LAPs and 40.6% for HAPs. Wound infection rates were 1.4% for LAPs vs 14.1% for HAPs, and 30-day mortality rates were 0.3% for LAPs and 1.6% for HAPs. The mean (SD) hospital stay was 2.1 (3.6) days for LAPs vs 10.5 (7.0) days for HAPs. Negative performance factors were significantly higher for patients who underwent HAPs and had comorbid conditions. Differences among surgeons significantly affected the incidence of negative performance indicators. The factors affecting performance measures were, in order of decreasing significance, procedure acuity, the surgeon, and comorbidity. Surgeons were ranked low, middle, or high based on their negative performance indicators. Conclusions: Performance measures following oncologic procedures were significantly affected by comorbid conditions and by procedure acuity. Although the latter most strongly affected quality and performance indicators, both should weigh heavily in physician comparisons. The incidence of negative performance indicators was also influenced by the individual surgeon. These data may serve as a tool to evaluate and improve physician performance and outcomes and to develop risk-adjusted benchmarks. Ultimately, reimbursement may be tied to quantifiable measures of physician and institutional performance.
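The stratified, risk-adjusted comparison described above can be illustrated with a short sketch: group each patient record by procedure acuity and by the presence of comorbid conditions, then tabulate the negative performance indicators within each stratum. The code below is a minimal sketch with synthetic data; the column names, the binary comorbidity split, and the pandas-based approach are illustrative assumptions, not the authors' actual analysis.

```python
# Minimal sketch of acuity/comorbidity stratification with hypothetical data.
import pandas as pd

# Synthetic example records: one row per surgical patient (illustrative only).
records = pd.DataFrame({
    "surgeon":         ["A", "A", "B", "B", "C", "C", "C", "A"],
    "acuity":          ["LAP", "HAP", "LAP", "HAP", "LAP", "HAP", "HAP", "LAP"],
    "comorbidities":   [0, 2, 1, 3, 0, 1, 2, 0],     # count of comorbid conditions
    "transfusion":     [0, 1, 0, 1, 0, 0, 1, 0],     # 1 = event within 30 days
    "wound_infection": [0, 1, 0, 0, 0, 1, 0, 0],
    "length_of_stay":  [2, 11, 3, 9, 1, 12, 8, 2],   # days
})

# Stratify by procedure acuity and by presence of any comorbid condition,
# then tabulate negative performance indicators within each stratum.
records["any_comorbidity"] = records["comorbidities"] > 0
summary = (
    records
    .groupby(["acuity", "any_comorbidity"])
    .agg(
        n=("transfusion", "size"),
        transfusion_rate=("transfusion", "mean"),
        infection_rate=("wound_infection", "mean"),
        mean_los=("length_of_stay", "mean"),
    )
)
print(summary)
```

The same grouping could be extended with the surgeon as a third stratification variable to rank surgeons on their risk-adjusted indicator rates, as the study does.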
PURPOSE: Patients have been increasingly using physician-rating websites (PRWs); however, few studies have analyzed the validity of star ratings on PRWs. We aimed to compare PRW patient satisfaction scores with internally generated patient satisfaction scores (internal scores) of physicians at a large quaternary cancer center. METHODS: We collected internal scores and PRW scores for physicians at MD Anderson Cancer Center. Internal scores were based on patient responses to the Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) patient experience survey. Only physicians with an internal score based on ≥ 30 patient reviews were included. The median numbers of reviews and median scores were compared between internal data and four PRWs (Google, HealthGrades, Vitals, and WebMD). Both internally and on PRWs, possible scores ranged from 1 (least satisfied) to 5 (most satisfied). RESULTS: Of 640 physicians with an internal score, 510 (79.7%) met the inclusion criteria. For these 510 physicians, the median (IQR) number of internal reviews was 49.5 (30-93) and the median (IQR) internal score was 4.89 (4.81-4.93); the median number of reviews on PRWs ranged from 2 to 7, and the median score on PRWs ranged from 4.40 to 5.00. No physician had an internal score < 4, but the proportion of physicians with a score < 4 on PRWs ranged from 16% to 30%. CONCLUSION: Internal patient satisfaction scores were higher and calculated from more reviews than PRW patient satisfaction scores and correlated weakly with PRW scores. Given that patients rely on PRWs when evaluating potential physicians, we recommend publishing internal scores online to give patients more complete information regarding physician performance.
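The comparison in this abstract (per-physician internal scores versus PRW scores, summarized by median and IQR, with a correlation between the two sources) can be sketched as below. The paired values are synthetic, and the use of a Spearman rank correlation from SciPy is an assumption about how a weak correlation could be quantified, not the authors' stated method.

```python
# Hedged sketch: compare internal vs. PRW satisfaction scores (synthetic data).
import numpy as np
from scipy import stats

# Hypothetical paired per-physician scores (1 = least satisfied, 5 = most satisfied).
internal = np.array([4.91, 4.88, 4.85, 4.93, 4.89, 4.82, 4.90, 4.87])
prw      = np.array([5.00, 3.50, 4.60, 4.80, 2.00, 5.00, 4.40, 4.90])

def median_iqr(scores):
    """Return the median and interquartile range of a score array."""
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    return med, (q1, q3)

print("internal:", median_iqr(internal))
print("PRW:     ", median_iqr(prw))

# Proportion of physicians with a score below 4 from each source.
print("internal < 4:", np.mean(internal < 4))
print("PRW < 4:     ", np.mean(prw < 4))

# Spearman rank correlation between the two score sources.
rho, p = stats.spearmanr(internal, prw)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```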
Background: Quality Oncology Practice Initiative (QOPI) measures fall into one of three categories: (1) core, (2) disease-specific (breast, colon, lung, non-Hodgkin lymphoma [NHL], gynecologic cancers), or (3) domain-specific (symptom control, end-of-life). For each data collection period (DCP), participating sites choose to submit data in at least one disease- or domain-specific module. Charts are identified and abstracted for the selected module(s) based on eligibility criteria; the same charts are also abstracted for core measures. Our group hypothesized that the case mix resulting from the choice of module(s) would affect performance on a subset of core measures. Methods: The MD Anderson Regional Care Centers participated in QOPI over nine DCPs from Fall 2009 to Spring 2014. Unexplained variation was identified in staging documentation (core measure 2) and chemotherapy intent documentation (core measure 10). For each DCP, QOPI chart-level data were reviewed. Adherence to each measure was tabulated and stratified by tumor type. Because of small sample sizes within each DCP, data were pooled and analyzed with descriptive statistics and chi-square testing. Results: Over nine DCPs, stage and chemotherapy intent were documented in 89.1% and 81.3% of charts, respectively. There was a significant association between tumor type and documentation of both stage (χ²(4) = 30.4, N = 727, p < .001) and chemotherapy intent (χ²(4) = 157.5, N = 534, p < .001). Documentation of stage and chemotherapy intent was highest for breast (100%, 93.6%) and colorectal cancers (92.7%, 92.1%) and lowest for NHL (71.8%, 32.8%). Conclusions: Observed variation in documentation of stage and chemotherapy intent was primarily due to tumor type. The reasons for this observation are myriad and likely include factors related to the providers, the practice, the measures, and the differing complexity of tumor types. This variation in quality scores by tumor type (driven by module selection) could have significant implications in today's pay-for-performance environment.
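The reported test is a chi-square test of independence between tumor type and whether a measure was documented. The sketch below reproduces the mechanics with a hypothetical contingency table; only its shape (five tumor types by documented/not documented, giving 4 degrees of freedom) mirrors the analysis described, and the counts themselves are invented for illustration.

```python
# Minimal sketch of the chi-square test of independence (hypothetical counts).
from scipy.stats import chi2_contingency

# Rows: breast, colorectal, lung, NHL, gynecologic.
# Columns: [stage documented, stage not documented].
observed = [
    [150,  0],   # breast
    [140, 11],   # colorectal
    [120, 15],   # lung
    [ 90, 35],   # NHL
    [110, 20],   # gynecologic
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")
```

With a 5x2 table the test has (5-1)(2-1) = 4 degrees of freedom, matching the χ²(4) statistics reported in the Results.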