Background: Audit and feedback (A&F) is a valuable quality improvement strategy that can contribute to the de-implementation of low-value care. In the Netherlands, all health insurers collaboratively provide A&F to general practitioners (GPs) in the form of the 'Primary Care Practice Report' (PCPR). Unfortunately, GPs make limited use of this report. This study examined GPs' thoughts on the usability of the PCPR and their recommendations for improving it. Method: We used an interpretative qualitative design, with think-aloud tasks to uncover GPs' thoughts on the usability of the PCPR and semistructured interview questions to elicit their recommendations for improving the PCPR. Interviews were audiorecorded and transcribed verbatim. Data were analysed using thematic content analysis. Results: We identified two main themes: 'poor usability of the PCPR' and 'minimal motivation to change based on the PCPR'. The GPs found the usability of the PCPR poor because the feedback was not clinically meaningful; the data were not recent, individual or reliable; the performance comparators offered insufficient guidance for assessing clinical performance; the results were not discussed with peers; and the definitions and visuals were unclear. The GPs recommended improvements on these issues. Their motivation to change based on the PCPR was minimal. Conclusions: The GPs evaluated the PCPR as poorly usable and were minimally motivated to change. The PCPR appears to have been developed from the perspective of the report's commissioners, the health insurers, and does not meet known criteria for effective A&F design and user-centred design. Importantly, the GPs did state that well-designed feedback could contribute to their motivation to improve clinical performance. Furthermore, the GPs stated that they receive a multitude of A&F reports, which they hardly use. We therefore see a need for policy makers to invest in fewer, but more usable, A&F reports.
Background: Medical practice variation in caesarean section rates is the most studied type of practice variation in the field of obstetrics and gynaecology. This has not resulted in increased homogeneity of treatment between geographic areas or healthcare providers. Our aim was to evaluate whether current study designs on medical practice variation in caesarean section rates were optimized to identify the unwarranted share of practice variation, and whether they could contribute to the reduction of unwarranted practice variation by meeting criteria for audit and feedback. Methods: We searched PubMed, Embase, EBSCO/CINAHL and Wiley/Cochrane Library from inception to March 24th, 2020. Studies that compared the rate of caesarean sections between individuals, institutions or geographic areas were included. Study design was assessed on: selection procedure of the study population, data source, case-mix correction, patient preference, aggregation level of analysis, maternal and neonatal outcomes, and determinants (professional and organizational characteristics). Results: A total of 284 studies were included. Most studies (64%) measured the caesarean section rate in the entire study population rather than in a sample (30%). (National) databases were most often used as the information source (57%). Case-mix correction was performed in 87 studies (31%). The Robson classification was used in 20% of the studies following its endorsement by the WHO in 2015. The most common levels of aggregation were hospital level (35%) and grouped hospitals (35%), e.g. private versus public. The percentage of studies that assessed the relationship between variation in caesarean section rates and maternal outcome was 9%; for neonatal outcome it was 19%, for determinants (professional and organizational characteristics) 21%, and for patient preference 2%. Conclusions: Study designs of practice variation in caesarean sections varied considerably, raising questions about their appropriateness. Studies focused on measuring practice variation rather than contributing to the reduction of unwarranted practice variation. Future studies should correct for differences in patient characteristics (case-mix) and patient preference to identify unwarranted practice variation. Practice variation studies could be used for audit and feedback if results are presented at lower levels of aggregation and appeal to the intrinsic motivation of physicians, for example by including the health effects on mother and child.
Background: Audit and feedback informs healthcare providers and may affect professional practice and patient outcomes. The Primary Care Practice Report (PCPR) is a web-based personalized feedback instrument for general practitioners (GPs) in the Netherlands, based on claims data. Its yearly uptake is limited. In order to improve the use and usability of the report, this study aims to identify key criteria that GPs deem important for audit and feedback. Methods: A qualitative interpretative approach was used. We interviewed 12 GPs about their use of the practice report. These interviews followed the Three-Step Test-Interview method. Thematic content analysis was used to investigate the GPs' perception of the report and their perspective on its usability. The interviews resulted in critical items for audit and feedback, against which all tables and graphs in the report were systematically assessed. Results: From the interviews with GPs, recurring criteria emerged that were identified as decisive for the effectiveness of performance feedback: content, reliability, validity and usability. The 34 tables and graphs of the PCPR were assessed using factual characteristics. Content analysis shows that the PCPR has a strong focus on costs. Assessment of validity indicates that case-mix correction is always performed where relevant, but the explanatory notes hardly ever clarify which (sub)population is measured (3/34); therefore, GPs in general have difficulty interpreting the results. Assessment of usability shows that, although benchmark figures are almost always presented based on national references, the formulation of goals or any specific attention to an action perspective is lacking, because the perceived comparability with the GP's own patient group is limited. Conclusions: The current PCPR does not meet key criteria for effective audit and feedback, as defined by GPs. It has a strong focus on costs instead of clinical behavior and is poorly understood when it comes to the specific population the data reflect. The results are in line with theoretical perspectives on learning and improvement of professionals. Improvement of the PCPR requires information on aspects of clinical behavior that are recognizable to GPs and that they can actually influence. First steps have been made to improve the method of case-mix correction in the benchmarking.