BACKGROUND: Although many patient, physician, and payment predictors of adherence have been described, knowledge of their relative strength and overall ability to explain adherence is limited.
OBJECTIVES: To measure the contributions of patient, physician, and payment predictors in explaining adherence to statins.
RESEARCH DESIGN: Retrospective cohort study using administrative data.
SUBJECTS: 14,257 patients insured by Horizon Blue Cross Blue Shield of New Jersey (BCBSNJ) who were newly prescribed a statin cholesterol-lowering medication.
MEASURES: Adherence to statin medication was measured during the year after the initial prescription, based on the proportion of days covered (PDC). The impact of patient, physician, and payment predictors of adherence was evaluated using multivariate logistic regression. The explanatory power of these models was evaluated with the C statistic, a measure of goodness of fit.
RESULTS: Overall, 36.4% of patients were fully adherent. Older patient age, male gender, lower neighborhood percent black composition, higher median income, and fewer emergency department (ED) visits were significant patient predictors of adherence. Having the statin prescribed by a cardiologist, by the patient's primary care physician, or by a US medical graduate was a significant physician predictor of adherence. Lower copayments also predicted adherence. All of our models had low explanatory power. Multivariate models including patient covariates only had greater explanatory power (C = 0.613) than models with physician variables only (C = 0.566) or copayments only (C = 0.543). A fully specified model had only slightly more explanatory power (C = 0.633) than the model with patient characteristics alone.
CONCLUSIONS: Despite relatively comprehensive claims data on patients, physicians, and out-of-pocket costs, our overall ability to explain adherence remains poor. Administrative data likely do not capture many complex mechanisms underlying adherence.
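For readers unfamiliar with the two measures the abstract leans on, the minimal sketch below (not the study's code) shows how a proportion-of-days-covered value can be computed from pharmacy fill records and how a logistic model's C statistic, which equals the area under the ROC curve, is obtained. The fill data, covariates, and the PDC ≥ 0.8 full-adherence cutoff are illustrative assumptions; the paper's exact definitions may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def pdc(fill_days, days_supplied, window=365):
    """Proportion of days covered: fraction of the follow-up window
    on which the patient held dispensed medication supply."""
    covered = np.zeros(window, dtype=bool)
    for start, supply in zip(fill_days, days_supplied):
        covered[start:min(start + supply, window)] = True
    return covered.mean()

# Toy patient: three consecutive 30-day fills, then a 90-day fill after a gap.
p = pdc([0, 30, 60, 200], [30, 30, 30, 90])
print(p, p >= 0.8)   # ~0.49; 0.8 is a common full-adherence cutoff (assumed here)

# C statistic = area under the ROC curve of a fitted logistic model.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                    # stand-ins for patient covariates
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # synthetic adherence indicator
model = LogisticRegression().fit(X, y)
print(roc_auc_score(y, model.predict_proba(X)[:, 1]))
```

A C statistic of 0.5 corresponds to a model with no discriminative ability and 1.0 to perfect discrimination, which is why values such as 0.613 and 0.633 above are described as low explanatory power.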
BACKGROUND
The Relative Value Scale Update Committee (RUC) of the American Medical Association plays a central role in determining physician reimbursement. The RUC’s role and performance have been criticized but subjected to little empirical evaluation.
METHODS
We analyzed the accuracy of valuations of 293 common surgical procedures from 2005 through 2015. We compared the RUC’s estimates of procedure time with “benchmark” times for the same procedures derived from the clinical registry maintained by the American College of Surgeons National Surgical Quality Improvement Program (NSQIP). We characterized inaccuracies, quantified their effect on physician revenue, and examined whether re-review corrected them.
RESULTS
At the time of 108 RUC reviews, the mean absolute discrepancy between RUC time estimates and benchmark times was 18.5 minutes, or 19.8% of the RUC time. However, RUC time estimates were neither systematically shorter nor longer than benchmark times overall (β, 0.97; 95% confidence interval, 0.94 to 1.01; P = 0.10). Our analyses suggest that whereas orthopedic surgeons and urologists received higher payments than they would have if benchmark times had been used ($160 million and $40 million more, respectively, in Medicare reimbursement in 2011 through 2015), cardiothoracic surgeons, neurosurgeons, and vascular surgeons received lower payments ($130 million, $60 million, and $30 million less, respectively). The accuracy of RUC time estimates improved in 47% of RUC revaluations, worsened in 27%, and was unchanged in 25%. (Percentages do not sum to 100 because of rounding.)
CONCLUSIONS
In this analysis of frequently conducted operations, we found substantial absolute discrepancies between intraoperative times as estimated by the RUC and the times recorded for the same procedures in a surgical registry, but the RUC did not systematically overestimate or underestimate times. (Funded by the National Institutes of Health.)
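The two headline quantities, a mean absolute discrepancy and a regression slope near 1, can be illustrated with a short sketch on synthetic data. This is an assumed form of the analysis, not the authors' code: the data-generating process and a log-log specification for the slope are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
ruc = rng.uniform(30, 240, size=293)                # hypothetical RUC time estimates (min)
registry = ruc * rng.lognormal(0.0, 0.2, size=293)  # hypothetical NSQIP benchmark times

abs_disc = np.abs(ruc - registry)
print(abs_disc.mean())             # mean absolute discrepancy (cf. 18.5 min in the paper)
print((abs_disc / ruc).mean())     # discrepancy as a share of RUC time (cf. 19.8%)

# Slope of log(benchmark) on log(RUC); values near 1 indicate that estimates
# are neither systematically shorter nor longer than benchmarks.
slope, intercept = np.polyfit(np.log(ruc), np.log(registry), 1)
print(slope)
```

The sketch makes the paper's central distinction concrete: individual estimates can be off by a large absolute amount even when there is no systematic bias in either direction.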
Physicians, judges, teachers, and agents in many other settings differ systematically in the decisions they make when faced with similar cases. Standard approaches to interpreting and exploiting such differences assume they arise solely from variation in preferences. We develop an alternative framework that allows variation in both preferences and diagnostic skill, and show that both dimensions are identified in standard settings under quasi-random assignment. We apply this framework to study pneumonia diagnoses by radiologists. Diagnosis rates vary widely among radiologists, and descriptive evidence suggests that a large component of this variation is due to differences in diagnostic skill. Our estimated model suggests that radiologists view failing to diagnose a patient with pneumonia as more costly than incorrectly diagnosing one without, and that this leads less-skilled radiologists to optimally choose lower diagnosis thresholds. Variation in skill can explain 44 percent of the variation in diagnostic decisions, and policies that improve skill perform better than uniform decision guidelines. Failing to account for skill variation can lead to highly misleading results in research designs that use agent assignments as instruments.
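The threshold logic in the abstract can be made concrete with a toy decision problem: when missing a true case is costlier than a false alarm, the expected-cost-minimizing rule diagnoses at posterior probabilities below one half. The sketch below is not the authors' structural model (in particular, it omits skill heterogeneity across radiologists), and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, prevalence = 100_000, 0.1         # hypothetical pneumonia prevalence
c_fp, c_fn = 1.0, 4.0                # hypothetical costs: a missed case is 4x a false alarm
theta = rng.binomial(1, prevalence, size=n)    # true disease status
signal = theta + rng.normal(0.0, 0.7, size=n)  # noisy diagnostic signal (e.g., chest X-ray)

# Posterior P(theta = 1 | signal) under the known normal signal model.
lik1, lik0 = norm.pdf(signal, 1.0, 0.7), norm.pdf(signal, 0.0, 0.7)
posterior = prevalence * lik1 / (prevalence * lik1 + (1 - prevalence) * lik0)

def expected_cost(t):
    diagnose = posterior >= t
    return (c_fp * (diagnose & (theta == 0)).mean()
            + c_fn * (~diagnose & (theta == 1)).mean())

# Brute-force search recovers the theoretical optimum c_fp / (c_fp + c_fn).
grid = np.linspace(0.01, 0.99, 99)
best = grid[np.argmin([expected_cost(t) for t in grid])]
print(best, c_fp / (c_fp + c_fn))    # both close to 0.2, i.e., below one half
```

With these asymmetric costs the optimal posterior threshold is c_fp / (c_fp + c_fn) = 0.2 rather than 0.5; the paper's richer model additionally shows how this optimal cutoff shifts with a radiologist's diagnostic skill.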