Background Audit and feedback (A&F) is a common quality improvement strategy with highly variable effects on patient care. It is unclear how A&F effectiveness can be maximised. Since the core mechanism of action of A&F depends on drawing attention to a discrepancy between actual and desired performance, we aimed to understand current and best practices in the choice of performance comparator. Methods We described current choices for performance comparators by conducting a secondary review of randomised trials of A&F interventions and identifying the associated mechanisms that might have implications for effective A&F by reviewing theories and empirical studies from a recent qualitative evidence synthesis. Results We found across 146 trials that feedback recipients’ performance was most frequently compared against the performance of others (benchmarks; 60.3%). Other comparators included recipients’ own performance over time (trends; 9.6%) and target standards (explicit targets; 11.0%), and 13% of trials used a combination of these options. In studies featuring benchmarks, 42% compared against mean performance. Eight (5.5%) trials provided a rationale for using a specific comparator. We distilled mechanisms of each comparator from 12 behavioural theories, 5 randomised trials, and 42 qualitative A&F studies. Conclusion Clinical performance comparators in published literature were poorly informed by theory and did not explicitly account for mechanisms reported in qualitative studies. 
Based on our review, we argue that there is considerable opportunity to improve the design of performance comparators by (1) providing tailored comparisons rather than benchmarking everyone against the mean, (2) limiting the number of comparators displayed while providing further comparative information on request, to balance the feedback's credibility and actionability, (3) providing performance trends, but not trends alone, and (4) encouraging feedback recipients to set personal, explicit targets guided by relevant information. Electronic supplementary material The online version of this article (10.1186/s13012-019-0887-1) contains supplementary material, which is available to authorized users.
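The three comparator types discussed above can be made concrete with a small sketch. The function and data below are illustrative only (not from the paper): they compute, for one feedback recipient, the gap against a peer benchmark, against the recipient's own trend, and against an explicit target.

```python
# Illustrative sketch, not the paper's method. All names and numbers are
# hypothetical examples of the three comparator types described above.
from statistics import mean

def build_comparators(recipient_rates, peer_rates, explicit_target):
    """Return comparator gaps for a recipient's latest audit period.

    recipient_rates : recipient's own performance over time (trend)
    peer_rates      : latest-period performance of peers (benchmark)
    explicit_target : an agreed target standard, e.g. 0.95
    """
    current = recipient_rates[-1]
    return {
        "current": current,
        # Benchmark: comparison against others (here the peer mean, the
        # most common but arguably least tailored choice).
        "benchmark_gap": current - mean(peer_rates),
        # Trend: comparison against the recipient's own past performance.
        "trend_gap": current - mean(recipient_rates[:-1]),
        # Explicit target: comparison against a set standard.
        "target_gap": current - explicit_target,
    }

gaps = build_comparators([0.70, 0.78, 0.84], [0.80, 0.90, 0.85], 0.95)
```

A feedback report built this way could, per the review's suggestions, show only one or two of these gaps by default and expose the rest on request.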
Background The program “Implementing Goals of Care Conversations with Veterans in VA LTC Settings” is proposed in partnership with the US Veterans Health Administration (VA) National Center for Ethics in Health Care and the Geriatrics and Extended Care Program Offices, together with the VA Office of Nursing Services. The three projects in this program are designed to support a new system-wide mandate requiring providers to conduct and systematically record conversations with veterans about their preferences for care, particularly life-sustaining treatments. These treatments include cardiac resuscitation, mechanical ventilation, and other forms of life support. However, veteran preferences for care go beyond whether or not they receive life-sustaining treatments to include issues such as whether or not they want to be hospitalized if they are acutely ill, and what kinds of comfort care they would like to receive. Methods Three projects, all focused on improving the provision of veteran-centered care, are proposed. The projects will be conducted in Community Living Centers (VA-owned nursing homes) and VA Home-Based Primary Care programs in five regional networks in the Veterans Health Administration. In all the projects, we will use data from context and barrier-and-facilitator assessments to design feedback reports for staff to help them understand how well they are meeting the requirement to have conversations with veterans about their preferences and to document them appropriately.
We will also use learning collaboratives—meetings in which staff teams come together and problem-solve issues they encounter in getting veterans’ preferences expressed, documented, and acted on—to support action planning to improve performance. Discussion We will use data over time to track implementation success, measured as the proportion of veterans in Community Living Centers (CLCs) and Home-Based Primary Care (HBPC) who have a documented goals of care conversation soon after admission. We will work with our operational partners to spread approaches that work throughout the Veterans Health Administration. Electronic supplementary material The online version of this article (doi:10.1186/s13012-016-0497-0) contains supplementary material, which is available to authorized users.
Background Evidence shows that clinical audit and feedback can significantly improve compliance with desired practice, but it is unclear when and how it is effective. Audit and feedback is likely to be more effective when feedback messages can influence barriers to behavior change, but barriers to change differ across individual health-care providers, stemming from differences in providers’ individual characteristics. Discussion The purpose of this article is to invite debate and direct research attention towards a novel audit and feedback component that could enable interventions to adapt to barriers to behavior change for individual health-care providers: computer-supported tailoring of feedback messages. We argue that, by leveraging available clinical data, theory-informed knowledge about behavior change, and the knowledge of clinical supervisors or peers who deliver feedback messages, a software application that supports feedback message tailoring could improve feedback message relevance for barriers to behavior change, thereby increasing the effectiveness of audit and feedback interventions. We describe a prototype system that supports the provision of tailored feedback messages by generating a menu of graphical and textual messages with associated descriptions of targeted barriers to behavior change. Supervisors could use the menu to select messages based on their awareness of each feedback recipient’s specific barriers to behavior change. We anticipate that such a system, if designed appropriately, could guide supervisors towards giving more effective feedback for health-care providers. Summary A foundation of evidence and knowledge in related health research domains supports the development of feedback message tailoring systems for clinical audit and feedback.
Creating and evaluating computer-supported feedback tailoring tools is a promising approach to improving the effectiveness of clinical audit and feedback. Electronic supplementary material The online version of this article (doi:10.1186/s13012-014-0203-z) contains supplementary material, which is available to authorized users.
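The tailoring idea in the abstract above—a system that generates a menu of candidate messages, each tagged with the barriers it targets, from which a supervisor selects—can be sketched as a simple filter. The message library, barrier labels, and function below are hypothetical illustrations, not the paper's prototype.

```python
# Hypothetical sketch of computer-supported feedback message tailoring:
# filter a library of candidate messages by the barriers a supervisor has
# identified for one recipient. Library contents are illustrative only.

MESSAGE_LIBRARY = [
    {"text": "Your prescribing rate compared with peers", "kind": "graphical",
     "barriers": {"social influences"}},
    {"text": "Checklist reminder for guideline steps", "kind": "textual",
     "barriers": {"knowledge", "memory"}},
    {"text": "Trend of your documentation over six months", "kind": "graphical",
     "barriers": {"beliefs about capabilities"}},
]

def tailored_menu(recipient_barriers):
    """Return candidate messages whose targeted barriers overlap the
    recipient's identified barriers, for the supervisor to choose from."""
    wanted = set(recipient_barriers)
    return [m for m in MESSAGE_LIBRARY if m["barriers"] & wanted]

menu = tailored_menu(["knowledge"])
```

The point of keeping the supervisor in the loop, as the abstract argues, is that the software narrows the menu while the human judgment about which barrier actually applies stays with the person delivering the feedback.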
Introduction Sub-optimal performance of healthcare providers in low-income countries is a critical and persistent global problem. The use of electronic health information technology (eHealth) in these settings is creating large-scale opportunities to automate performance measurement and provision of feedback to individual healthcare providers, to support clinical learning and behavior change. An electronic medical record system (EMR) deployed in 66 antiretroviral therapy clinics in Malawi collects data that supervisors use to provide quarterly, clinic-level performance feedback. Understanding barriers to provision of eHealth-based performance feedback for individual healthcare providers in this setting could present a relatively low-cost opportunity to significantly improve the quality of care. Objective The aims of this study were to identify and describe barriers to using EMR data for individualized audit and feedback for healthcare providers in Malawi and to consider how to design technology to overcome these barriers. Methods We conducted a qualitative study using interviews, observations, and informant feedback in eight public hospitals in Malawi where an EMR is used. We interviewed 32 healthcare providers and conducted seven hours of observation of system use. Results We identified four key barriers to the use of EMR data for clinical performance feedback: provider rotations, disruptions to care processes, user acceptance of eHealth, and performance indicator lifespan. Each of these factors varied across sites and affected the quality of EMR data that could be used to generate performance feedback for individual healthcare providers. Conclusion Using routinely collected eHealth data to generate individualized performance feedback shows potential at large scale for improving clinical performance in low-resource settings. However, technology used for this purpose must accommodate ongoing changes in barriers to eHealth data use.
Understanding the clinical setting as a complex adaptive system (CAS) may enable designers of technology to effectively model change processes to mitigate these barriers.
Background The implementation of clinical decision support systems (CDSSs) as an intervention to foster clinical practice change is affected by many factors. Key factors include those associated with behavioral change and those associated with technology acceptance. However, the literature regarding these subjects is fragmented and originates from two traditionally separate disciplines: implementation science and technology acceptance. Objective Our objective is to propose an integrated framework that bridges the gap between the behavioral change and technology acceptance aspects of the implementation of CDSSs. Methods We employed an iterative process to map constructs from four contributing frameworks—the Theoretical Domains Framework (TDF); the Consolidated Framework for Implementation Research (CFIR); the Human, Organization, and Technology-fit framework (HOT-fit); and the Unified Theory of Acceptance and Use of Technology (UTAUT)—and the findings of 10 literature reviews, identified through a systematic review of reviews approach. Results The resulting framework comprises 22 domains: agreement with the decision algorithm; attitudes; behavioral regulation; beliefs about capabilities; beliefs about consequences; contingencies; demographic characteristics; effort expectancy; emotions; environmental context and resources; goals; intentions; intervention characteristics; knowledge; memory, attention, and decision processes; patient–health professional relationship; patient’s preferences; performance expectancy; role and identity; skills, ability, and competence; social influences; and system quality. We demonstrate the use of the framework providing examples from two research projects. Conclusions We proposed BEAR (BEhavior and Acceptance fRamework), an integrated framework that bridges the gap between behavioral change and technology acceptance, thereby widening the view established by current models.
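The BEAR framework above is essentially a mapping from source frameworks (TDF, CFIR, HOT-fit, UTAUT) to integrated domains. A minimal data-structure sketch can make that mapping queryable; the domain-to-framework assignments below are hypothetical examples, not the paper's actual mapping.

```python
# Illustrative sketch: tagging a few BEAR-style domains with the source
# framework(s) they draw on, then querying the map. The specific
# assignments here are hypothetical, not taken from the paper.

BEAR_DOMAINS = {
    "knowledge": {"TDF"},
    "effort expectancy": {"UTAUT"},
    "performance expectancy": {"UTAUT"},
    "intervention characteristics": {"CFIR"},
    "system quality": {"HOT-fit"},
    "social influences": {"TDF", "UTAUT"},
}

def domains_from(framework):
    """List the domains to which a given source framework contributes."""
    return sorted(d for d, sources in BEAR_DOMAINS.items()
                  if framework in sources)

utaut_domains = domains_from("UTAUT")
```

A structure like this makes the integration auditable: for any domain one can trace which tradition (behavioral change or technology acceptance) it came from, which is the gap-bridging the framework aims at.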