Background: National clinical audit programmes aim to improve patient care by reviewing performance against explicit standards and directing action towards areas not meeting those standards. Their impact can be improved by (1) optimising feedback content and format, (2) strengthening audit cycles and (3) embedding randomised trials evaluating different ways of delivering feedback.

Objectives: The objectives were to (1) develop and evaluate the effects of modifications to feedback on recipient responses, (2) identify ways of strengthening feedback cycles for two national audits and (3) explore opportunities, costs and benefits of national audit participation in a programme of trials.

Design: An online fractional factorial screening experiment (objective 1) and qualitative interviews (objectives 2 and 3).

Setting and participants: Participants were clinicians and managers involved in five national clinical audits – the National Comparative Audit of Blood Transfusions, the Paediatric Intensive Care Audit Network, the Myocardial Ischaemia National Audit Project, the Trauma Audit & Research Network and the National Diabetes Audit (objective 1) – and clinicians, members of the public and researchers (objectives 2 and 3).

Interventions: We selected and developed six online feedback modifications through three rounds of user testing. We randomised participants to one of 32 combinations of the following feedback modifications: recommended specific actions; comparators reinforcing desired behaviour change; multimodal feedback; minimised extraneous cognitive load for feedback recipients; short, actionable messages followed by optional detail; and incorporation of ‘the patient voice’ (objective 1).

Main outcome measures: The outcomes were intended actions, including enactment of audit standards (primary outcome), comprehension, user experience and engagement (objective 1).

Results: For objective 1, the primary analysis included 638 randomised participants, of whom 566 completed the outcome questionnaire. No modification independently increased intended enactment of audit standards. Minimised cognitive load improved comprehension (+0.1; p = 0.014) and intention to bring audit findings to colleagues’ attention (+0.13 on a –3 to +3 scale; p = 0.016). We observed important cumulative synergistic and antagonistic interactions between modifications, participant role and national audit. For objective 2, the analysis included 19 interviews assessing the Trauma Audit & Research Network and the National Diabetes Audit; identified ways of strengthening audit cycles included making performance data easier to understand and guiding action planning. For objective 3, the analysis of 31 interviews identified four conditions for effective collaboration: compromise – recognising capacity and constraints; logistics – enabling data sharing, audit quality and funding; leadership – engaging local stakeholders; and relationships – agreeing shared priorities and needs. The perceived benefits of collaboration outweighed the risks.

Limitations: The online experiment assessed intended enactment as a predictor of actual clinical behaviour. Interviews and surveys were subject to social desirability bias.

Conclusions: National audit impact may be enhanced by strengthening all aspects of feedback cycles, particularly effective feedback, and by considering how different ways of reinforcing feedback act together.

Future work: Embedded randomised trials evaluating different ways of delivering feedback within national clinical audits are acceptable and may offer efficient, evidence-based and cumulative improvements in outcomes.

Trial registration: This trial is registered as ISRCTN41584028.

Funding: This project was funded by the National Institute for Health and Care Research (NIHR) Health and Social Care Delivery Research programme and will be published in full in Health and Social Care Delivery Research, Vol. 10, No. 15. See the NIHR Journals Library website for further project information.
Background: Audit and feedback aims to improve patient care by comparing healthcare performance against explicit standards. It is used to monitor and improve patient care, including through National Clinical Audit (NCA) programmes in the UK. Variability in the effectiveness of audit and feedback is attributed to intervention design; separate randomised trials to address multiple questions about how to optimise effectiveness would be inefficient. We evaluated different feedback modifications to identify leading candidates for further “real-world” evaluation.

Methods: Using an online fractional factorial screening experiment, we randomised recipients of feedback from five UK NCAs to different combinations of six feedback modifications applied within an audit report excerpt: use effective comparators, provide multimodal feedback, recommend specific actions, provide optional detail, incorporate the patient voice, and minimise cognitive load. Outcomes, assessed immediately after exposure to the online modifications, included intention to enact audit standards (primary outcome, rated on a scale of −3 to +3 and tailored to the NCA), comprehension, user experience, and engagement.

Results: We randomised 1241 participants (clinicians, managers and audit staff) between April and October 2019. Inappropriate repeated participation occurred; we conservatively excluded all entries made during the affected period, leaving a primary analysis population of 638 (51.4%) participants. None of the six feedback modifications had an independent effect on intention across the five NCAs. We observed both synergistic and antagonistic effects across outcomes when modifications were combined; the specific NCA and whether recipients had a clinical role had dominant influences on outcome, and there was an antagonistic interaction between multimodal feedback and optional detail. Among clinical participants, predicted intention ranged from 1.22 (95% confidence interval 0.72 to 1.72) for the least effective combination, in which multimodal feedback, optional detail and reduced cognitive load were applied within the audit report, up to 2.40 (95% CI 1.88 to 2.93) for the most effective combination, which included multimodal feedback, specific actions, patient voice and reduced cognitive load.

Conclusion: Potentially important synergistic and antagonistic effects were identified across combinations of feedback modifications, audit programmes and recipients, suggesting that feedback designers must explicitly consider how different features of feedback may interact to achieve (or undermine) the desired effects.

Trial registration: International Standard Randomised Controlled Trial Number ISRCTN41584028.
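The design above assigns each participant one of 32 combinations of six two-level feedback modifications, i.e. half of the 64 possible combinations. As a minimal sketch, the following generates a regular 2^(6−1) half-fraction; the defining relation I = ABCDEF (the even-parity fraction) and the modification names are illustrative assumptions, since the trial's actual aliasing structure is not stated here.

```python
# Sketch: a 2^(6-1) half-fraction of six binary feedback modifications,
# giving 32 of the 64 possible combinations.
# Assumption: defining relation I = ABCDEF (keep even-parity runs only).
from itertools import product

MODIFICATIONS = [
    "effective_comparators",
    "multimodal_feedback",
    "specific_actions",
    "optional_detail",
    "patient_voice",
    "minimised_cognitive_load",
]

def half_fraction():
    """Return the 32 runs whose 0/1 factor levels have even parity."""
    runs = []
    for levels in product([0, 1], repeat=len(MODIFICATIONS)):
        if sum(levels) % 2 == 0:  # principal fraction under I = ABCDEF
            runs.append(dict(zip(MODIFICATIONS, levels)))
    return runs

runs = half_fraction()
print(len(runs))  # 32 combinations
```

A half-fraction halves the number of experimental arms at the cost of aliasing the six-way interaction with the intercept, which is acceptable for a screening experiment focused on main effects and low-order interactions.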
Background: Online studies offer an efficient method of recruiting participants and collecting data. Whilst delivering an online randomised trial, we detected unusual recruitment activity. We describe our approach to detecting and managing suspected fraud and share lessons for researchers.

Methods: Our trial investigated the single and combined effects of different ways of presenting clinical audit and feedback. Clinicians and managers who received feedback from one of five United Kingdom national clinical audit programmes were emailed invitations containing a link to the trial website. After providing consent and selecting their relevant audit, participants were randomised automatically to different feedback versions. Immediately after viewing their assigned feedback, participants completed a questionnaire and could request a financial voucher by entering an email address. Email addresses were not linked to trial data, to preserve participant anonymity. We actively monitored participant numbers, questionnaire completions and voucher claims.

Results: Following a rapid increase in trial participation, we identified 268 new voucher claims from three email addresses that we had reason to believe were linked. Further scrutiny revealed duplicate trial completions and voucher requests from 24 email addresses. We immediately suspended the trial, improved security measures, and went on to complete the study successfully. We found a peak in questionnaires completed in under 20 seconds during the likely contamination period. Because study and personal data were not linked, we could not directly identify the trial data corresponding to the 268 duplicate entries among the 603 randomisations occurring during that period. We therefore excluded all 603 randomisations from the primary analysis, which was consequently based on 638 randomisations. A sensitivity analysis including all 961 randomisations over the entire study, except questionnaire completions of under 20 seconds, found only minor differences from the primary analysis.

Conclusion: Online studies offering incentives for participation are at risk of attempted fraud. Systematic monitoring and analysis can help detect such activity. Measures to protect study integrity include linking participant identifiers to study data, balancing study security against ease of participation, and safeguarding the allocation of participant incentives.

Trial registration: International Standard Randomised Controlled Trial Number ISRCTN41584028. Registered 17 August 2017.
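The monitoring described above rests on two simple signals: implausibly fast questionnaire completions and repeated voucher claims from the same email address. A minimal sketch of that kind of integrity check follows; the record field names, the 20-second threshold and the one-claim limit are illustrative assumptions, not the trial's actual implementation.

```python
# Sketch of two fraud-detection signals for an online trial:
# (1) questionnaires completed faster than a plausibility threshold,
# (2) email addresses making more voucher claims than allowed.
# Field names and thresholds are illustrative assumptions.
from collections import Counter

FAST_COMPLETION_SECONDS = 20  # assumed plausibility threshold

def flag_fast_completions(records):
    """Return records whose completion time is under the threshold.

    Each record needs 'started_at' and 'completed_at' datetimes.
    """
    return [
        r for r in records
        if (r["completed_at"] - r["started_at"]).total_seconds()
        < FAST_COMPLETION_SECONDS
    ]

def flag_repeat_claimants(voucher_claims, max_claims=1):
    """Return email addresses that claimed more vouchers than allowed."""
    counts = Counter(c["email"] for c in voucher_claims)
    return {email for email, n in counts.items() if n > max_claims}
```

Run on a rolling basis, such checks surface anomalies like the spike in sub-20-second completions reported above without requiring participant identifiers to be linked to trial data.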