Background
Plan–do–study–act (PDSA) cycles provide a structure for iterative testing of changes to improve the quality of systems. The method is widely accepted in healthcare improvement; however, there is little overarching evaluation of how the method is applied. This paper proposes a theoretical framework for assessing the quality of application of PDSA cycles and explores the consistency with which the method has been applied in the peer-reviewed literature against this framework.

Methods
NHS Evidence and Cochrane databases were searched by three independent reviewers. Empirical studies were included that reported application of the PDSA method in healthcare. Application of PDSA cycles was assessed against key features of the method, including documentation characteristics, use of iterative cycles, prediction-based testing of change, initial small-scale testing and use of data over time.

Results
73 of 409 individual articles identified met the inclusion criteria. Of the 73 articles, 47 documented PDSA cycles in sufficient detail for full analysis against the whole framework. Many of these studies reported application of the PDSA method that failed to accord with primary features of the method. Fewer than 20% (14/73) fully documented the application of a sequence of iterative cycles. Furthermore, a lack of adherence to the notion of small-scale change was apparent, and only 15% (7/47) reported the use of quantitative data at monthly or more frequent intervals to inform the progression of cycles.

Discussion
To progress the development of the science of improvement, a greater understanding of the use of improvement methods, including PDSA, is essential to draw reliable conclusions about their effectiveness. This would be supported by the development of systematic and rigorous standards for the application and reporting of PDSAs.
The implementation of evidence-based treatments to deliver high-quality care is essential to meet the healthcare demands of aging populations. However, the sustainable application of recommended practice is difficult to achieve, and variable outcomes are well recognised. The NHS Institute for Innovation and Improvement Sustainability Model (SM) was designed to help healthcare teams recognise determinants of sustainability and take action to embed new practice in routine care. This article describes a formative evaluation of the application of the SM by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care for Northwest London (CLAHRC NWL).

Data from project teams’ responses to the SM and from formal reviews were used to assess the acceptability of the SM and the extent to which it prompted teams to take action. Projects were classified as ‘engaged,’ ‘partially engaged’ and ‘non-engaged.’ Quarterly survey feedback data were used to explore reasons for variation in engagement. Score patterns were compared against formal review data, and a ‘diversity of opinion’ measure was derived to assess response variance over time.

Of the 19 teams, six were categorized as ‘engaged,’ six as ‘partially engaged’ and seven as ‘non-engaged.’ Twelve teams found the model acceptable to some extent. Diversity of opinion reduced over time. A minority of teams used the SM consistently to take action to promote sustainability, but for the majority SM use was sporadic. Feedback from some team members indicated difficulty in understanding and applying the model and negative views regarding its usefulness.

The SM is an important attempt to enable teams to systematically consider determinants of sustainability, provide timely data to assess progress, and prompt action to create the conditions for sustained practice. Tools such as these need to be tested in healthcare settings to assess their strengths and weaknesses, and the findings disseminated to aid development. This study indicates that the SM provides a potentially useful approach to measuring teams’ views on the likelihood of sustainability and to prompting action. Securing the engagement of teams with the SM was challenging, and redesign of some elements may need to be considered. Capacity building and facilitation appear necessary for teams to deploy the SM effectively.
Background
The identification and articulation of programme theory can support the effective design, execution and evaluation of quality improvement (QI) initiatives. Programme theory includes an agreed aim, potential interventions to achieve this aim, anticipated cause/effect relationships between the interventions and the aim, and measures to monitor improvement. This paper outlines the approach used in a research and improvement programme to support QI initiatives in identifying and articulating programme theory: the action effect method.

Background to method development
Building on a previously used QI method, the driver diagram, the action effect method was developed through co-design and iteration over four annual rounds of improvement initiatives. This resulted in a specification of the elements required to fully articulate the programme theory of a QI initiative.

The action effect method
The action effect method is a systematic and structured process for identifying and articulating a QI initiative's programme theory. The method connects potential interventions and implementation activities with an overall improvement aim through a diagrammatic representation of hypothesised and evidenced cause/effect relationships. Measure concepts, in terms of service delivery and patient and system outcomes, are identified to support evaluation.

Discussion and conclusions
The action effect method provides a framework to guide the execution and evaluation of a QI initiative, a focal point for other QI methods and a communication tool to engage stakeholders. A clear definition of what constitutes a well-articulated programme theory is provided to guide the use of the method and assessment of the fidelity of its application.
Previously, we showed that 3% (31/1032) of asymptomatic healthcare workers (HCWs) from a large teaching hospital in Cambridge, UK, tested positive for SARS-CoV-2 in April 2020. About 15% (26/169) of HCWs with symptoms of coronavirus disease 2019 (COVID-19) also tested positive for SARS-CoV-2 (Rivett et al., 2020). Here, we show that the proportion of both asymptomatic and symptomatic HCWs testing positive for SARS-CoV-2 rapidly declined to near-zero between 25th April and 24th May 2020, corresponding to a decline in patient admissions with COVID-19 during the ongoing UK ‘lockdown’. These data demonstrate how infection prevention and control measures, including staff testing, may help prevent hospitals from becoming independent ‘hubs’ of SARS-CoV-2 transmission, and illustrate how, with appropriate precautions, organizations in other sectors may be able to resume on-site work safely.