Purpose: To evaluate the reliability and validity of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Patient-Centered Medical Home Survey.
Methods: We conducted a field test of the CAHPS patient-centered medical home (PCMH) survey with 2,740 adults, with data collected by mail (n = 1,746), phone (n = 672), and web (n = 322) from 6 sites of care affiliated with a west-coast staff-model health maintenance organization.
Findings: The overall response rate was 37%. Internal consistency reliability estimates for the 7 multi-item scales were as follows: access to care (5 items, alpha = 0.79), communication with providers (6 items, alpha = 0.93), office staff courtesy and respect (2 items, alpha = 0.80), shared decision-making about medicines (3 items, alpha = 0.67), self-management support (2 items, alpha = 0.61), attention to mental health issues (3 items, alpha = 0.80), and care coordination (4 items, alpha = 0.58). The number of responses needed to obtain reliable information at the site-of-care level was generally acceptable for the composites (< 300 for a 0.70 reliability level), except for self-management support and shared decision-making about medicines. Item-scale correlations supported distinct composites, except that access to care and shared decision-making about medicines overlapped with the communication with providers scale. In a multiple regression model, shared decision-making and self-management support were each significantly and uniquely associated with the global rating of the provider (dependent variable), along with access and communication.
Implications: This study provides further support for the reliability and validity of the CAHPS PCMH survey, but the self-management support and shared decision-making scales need refinement. The survey can be used to provide information about the performance of different health plans on multiple domains of health care, but future efforts to improve some of the survey items are needed.
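The internal consistency estimates above are Cronbach's alpha values. As an illustration only (not the authors' code), a minimal sketch of how alpha is computed from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: two items that move together yield alpha near 1.
perfect = np.array([[1, 1], [2, 2], [3, 3]], dtype=float)
alpha = cronbach_alpha(perfect)  # 1.0 for perfectly correlated items
```

Scales with more items, or items that correlate more strongly, push alpha upward, which is consistent with the 6-item communication scale (0.93) outscoring the 2-item self-management support scale (0.61) above.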
We randomized half of the sample from two southern California medical centers to a post-paid incentive (n = 1,795) and half to no incentive (n = 1,797) for completing a web-based survey about their experiences with health care. Respondents in the incentive group were given the choice between $5 in cash or a Target® e-certificate. The characteristics of respondents in the incentive and control groups were similar on age, education, length of membership in the plan, number of emails sent, visits to the primary care doctor in the 12 months prior to sampling, and global rating of the doctor; the incentive group had more Asians (8% vs. 5%, χ2 (1 df) = 7.92, p = 0.005) and fewer Blacks/African Americans (2% vs. 4%, χ2 (1 df) = 11.0, p = 0.001) than the no-incentive group. Those randomized to the incentive were significantly more likely to respond to the survey than those in the control group (57% vs. 50%, t (df = 3590) = 4.06, p < 0.0001). Item nonresponse rates were similar in the incentive and control groups. Those randomized to the incentive condition who completed the survey preferred the cash incentive to the e-certificate (69% of the incentives delivered to web respondents were in the form of cash). The unit cost per incentive was $8.32 for cash and $7.49 for the e-certificate. The results of this experiment indicate that a post-paid incentive can significantly increase the response rate to a web-based survey.
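The response-rate comparison (57% vs. 50%) is a difference of two proportions. The abstract reports a t statistic; a pooled two-proportion z-test is a standard alternative for the same comparison. A sketch, using illustrative counts that approximate the reported rates rather than the study's exact data:

```python
from math import sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled z statistic for H0: p1 == p2, with x successes out of n trials."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts (assumed, not the study's raw data):
# ~57% of 1,795 incentive members vs. ~50% of 1,797 controls responded.
z = two_proportion_z(1023, 1795, 899, 1797)
```

With counts of this size, the z statistic lands in the same region as the reported t of 4.06, well past conventional significance thresholds.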
Background: Little is known about whether health information technology (HIT) affects patient experiences with health care.
Objective: To develop HIT questions that assess patients' care experiences not evaluated by existing ambulatory CAHPS measures.
Research Design: We reviewed published articles and conducted focus groups and cognitive testing to develop survey questions. We collected data, by mail and the internet, from patients of 69 physicians receiving care at an academic medical center and two regional integrated delivery systems in late 2009 and 2010. We evaluated questions and scales about HIT using factor analysis, item-scale correlations, and reliability (internal consistency and physician-level) estimates.
Results: We found support for three HIT composites: doctor use of computer (2 items), e-mail (2 items), and helpfulness of provider’s website (4 items). Corrected item-scale correlations were 0.37 for the two doctor use of computer items and 0.71 for the two e-mail items, and ranged from 0.50 to 0.60 for the provider’s website items. Cronbach’s alpha was high for e-mail (0.83) and provider’s website (0.75), but only 0.54 for doctor use of computer. As few as 50 responses per physician would yield reliability of 0.70 for e-mail and provider’s website. Two HIT composites, doctor use of computer (p < 0.001) and provider’s website (p = 0.02), were independent predictors of overall ratings of doctors.
Conclusions: New CAHPS HIT items were identified that measure aspects of patient experiences not assessed by the CAHPS C&G 1.0 survey.
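The claim that about 50 responses per physician yield 0.70 physician-level reliability follows from the Spearman-Brown prophecy formula: with n respondents per physician, reliability R = n·icc / (1 + (n − 1)·icc), where icc is the physician-level intraclass correlation. A sketch of the formula and its inverse (the icc value in the usage line is illustrative, not taken from the study):

```python
def unit_reliability(icc: float, n: float) -> float:
    """Spearman-Brown: reliability of the mean of n respondents per unit."""
    return n * icc / (1 + (n - 1) * icc)

def n_needed(icc: float, target: float = 0.70) -> float:
    """Respondents per unit needed to reach the target reliability."""
    return target * (1 - icc) / (icc * (1 - target))

# Hypothetical: an icc near 0.045 implies roughly 50 responses reach 0.70.
n50 = n_needed(0.045)
```

The formula makes the trade-off explicit: composites with a smaller physician-level icc require many more respondents per physician to reach the same reliability target.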