Background
Strategies are central to the National Institutes of Health's definition of implementation research as "the study of strategies to integrate evidence-based interventions into specific settings." Multiple scholars have proposed lists of the strategies used in implementation research and practice, increasingly classifying them under the single term "implementation strategies." We contend that classifying all strategies under a single term leads to confusion, impedes synthesis across studies, and limits advancement of the full range of strategies of importance to implementation. To address this concern, we offer a system for classifying implementation strategies that builds on Proctor and colleagues' (2013) reporting guidelines, which recommend that authors not only name and define their implementation strategies but also specify who enacted the strategy (i.e., the actor) and the level and determinants that were targeted (i.e., the action targets).

Main body
We build on Wandersman and colleagues' Interactive Systems Framework to distinguish strategies based on whether they are enacted by actors functioning as part of a Delivery, Support, or Synthesis and Translation System. We build on Damschroder and colleagues' Consolidated Framework for Implementation Research (CFIR) to distinguish the levels that strategies target (intervention, inner setting, outer setting, individual, and process). We then draw on numerous resources to identify determinants, conceptualized as modifiable factors that prevent or enable the adoption and implementation of evidence-based interventions. Identifying actors and targets resulted in five conceptually distinct classes of implementation strategies: dissemination, implementation process, integration, capacity-building, and scale-up. In our description of each class, we identify the level of the Interactive Systems Framework at which the strategy is enacted (actors), the level and determinants targeted (action targets), and the outcomes used to assess strategy effectiveness. We illustrate how each class would apply to efforts to improve colorectal cancer screening rates in Federally Qualified Health Centers.

Conclusions
Structuring strategies into classes will aid reporting of implementation research findings, alignment of strategies with relevant theories, synthesis of findings across studies, and identification of potential gaps in current strategy listings. Organizing strategies into classes will also assist users in locating the strategies that best match their needs.
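As a minimal sketch of how the proposed classification could be represented for reporting purposes: the class names, actor systems, and CFIR levels below come from the abstract, while all field names and the worked example (a hypothetical capacity-building strategy for colorectal cancer screening) are illustrative assumptions, not the authors' specification.

```python
from dataclasses import dataclass, field
from enum import Enum


class ActorSystem(Enum):
    """Interactive Systems Framework system in which the actor functions."""
    DELIVERY = "Delivery System"
    SUPPORT = "Support System"
    SYNTHESIS_TRANSLATION = "Synthesis and Translation System"


class CFIRLevel(Enum):
    """Levels a strategy can target, per the CFIR."""
    INTERVENTION = "intervention"
    INNER_SETTING = "inner setting"
    OUTER_SETTING = "outer setting"
    INDIVIDUAL = "individual"
    PROCESS = "process"


@dataclass
class ImplementationStrategy:
    """One reported strategy, following Proctor et al. (2013): name it,
    define it, and specify the actor and the action targets."""
    name: str
    definition: str
    strategy_class: str                 # e.g., "dissemination", "capacity-building"
    actor: ActorSystem                  # who enacts the strategy
    target_levels: list = field(default_factory=list)        # CFIRLevel values
    target_determinants: list = field(default_factory=list)  # modifiable factors
    outcomes: list = field(default_factory=list)             # effectiveness measures


# Hypothetical example: a capacity-building strategy aimed at raising
# colorectal cancer screening rates in Federally Qualified Health Centers.
training = ImplementationStrategy(
    name="Practice facilitation",
    definition="Ongoing coaching of clinic staff in quality-improvement methods",
    strategy_class="capacity-building",
    actor=ActorSystem.SUPPORT,
    target_levels=[CFIRLevel.INNER_SETTING, CFIRLevel.INDIVIDUAL],
    target_determinants=["QI skills", "leadership engagement"],
    outcomes=["adoption", "implementation fidelity"],
)
print(training.strategy_class, "enacted by:", training.actor.value)
```

Structuring reports this way would make the actor and action targets machine-readable, which is one route to the cross-study synthesis the abstract argues for.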
Background
Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to the difficulty of selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project, and understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use them, and the criteria they apply when selecting them.

Methods
We identified initial lists of uses of and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results.

Results
Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, those used by the most respondents to select theory were analytic level (58%), logical consistency/plausibility (56%), description of a change process (54%), and empirical support (53%). The criteria used by the fewest respondents were fecundity (10%), uniqueness (12%), and falsifiability (15%).

Conclusions
Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure. This variation in approaches to selecting theory warns against prescriptive guidance for theory selection. Instead, implementation scientists may benefit from considering the criteria that we propose in this paper and using them to justify their theory selection. Future research should seek to refine the criteria for theory selection to promote more consistent and appropriate use of theory in implementation science.
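The descriptive analysis the Methods describe (frequencies and percentages of respondents endorsing each selection criterion) amounts to a simple tally. A minimal sketch, with fabricated responses for illustration only (the real survey had 223 respondents and 19 criteria):

```python
from collections import Counter

# Each respondent checks the criteria they use to select a theory.
# These four response sets are invented for illustration.
responses = [
    {"analytic level", "empirical support"},
    {"analytic level", "logical consistency/plausibility"},
    {"description of a change process", "analytic level"},
    {"empirical support", "falsifiability"},
]

counts = Counter(c for criteria in responses for c in criteria)
n = len(responses)

for criterion, k in counts.most_common():
    print(f"{criterion}: {k}/{n} respondents ({100 * k / n:.0f}%)")
```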
Scholars in diverse health-related disciplines and specialty fields of practice routinely promote qualitative research as an essential component of intervention and implementation programs of research and of a comprehensive evidence base for practice. Remarkably little attention, however, has been paid to the most important element of qualitative studies, the findings in reports of those studies, and specifically to enhancing the accessibility and utilization value of those findings for diverse audiences of users. The findings in reports of qualitative health research are too often difficult to understand, and even to find, owing to the way they are presented. A basic strategy for enhancing the presentation of these findings is to translate them into thematic statements, which can then in turn be translated into the language of intervention and implementation. Writers of qualitative health research reports might consider these strategies to better showcase the significance and actionability of their findings to a wider audience.
Background
Numerous agencies provide training, technical assistance, and other support to build community-based practitioners' capacity to adopt and implement evidence-based prevention interventions. Yet little is known about how best to design capacity-building interventions to optimize their effectiveness. Wandersman et al. (Am J Community Psychol. 50:445–59, 2012) proposed the Evidence-Based System of Innovation Support (EBSIS) as a framework to guide research and thereby strengthen the evidence base for building practitioners' capacity. The purpose of this review was to contribute to further development of the EBSIS by systematically reviewing empirical studies of capacity-building interventions to identify (1) the range of strategies used, (2) variations in the way they were structured, and (3) evidence for their effectiveness at increasing practitioners' capacity to use evidence-based prevention interventions.

Methods
PubMed, EMBASE, and CINAHL were searched for English-language articles, published between January 2000 and January 2014, reporting findings of empirical studies of capacity-building interventions intended to increase use of evidence-based prevention interventions in non-clinical settings. To maximize review data, studies were not excluded a priori based on design or methodological quality. Using the EBSIS as a guide, two researchers independently extracted data from included studies. Vote counting and meta-summary methods were used to summarize findings.

Results
The review included 42 publications reporting findings from 29 studies. In addition to confirming the strategies and structures described in the EBSIS, the review identified two new strategies and two variations in structure. Capacity-building interventions were found to be effective at increasing practitioners' adoption (10 of 12 studies) and implementation (9 of 10 studies) of evidence-based interventions. Findings were mixed for interventions' effects on practitioners' capacity and intervention planning behaviors. Both the type and structure of capacity-building strategies may have influenced effectiveness. The review also identified contextual factors that may require variations in the ways capacity-building interventions are designed.

Conclusions
Based on review findings, refinements to the EBSIS are suggested. The refined framework moves the field towards a more comprehensive and standardized approach to conceptualizing the types and structures of capacity-building strategies. This standardization will assist with synthesizing findings across studies and guide capacity-building practice and research.

Electronic supplementary material
The online version of this article (doi:10.1186/s13012-015-0272-7) contains supplementary material, which is available to authorized users.
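The vote-counting summary described in the Methods tallies, per outcome, how many studies reported a positive effect (e.g., adoption in 10 of 12 studies). A minimal sketch under that assumption; the study records below are hypothetical, not data from the review:

```python
from collections import defaultdict

# Hypothetical extracted study records: one row per study-outcome pair,
# with a boolean indicating whether a positive effect was reported.
studies = [
    {"id": "study01", "outcome": "adoption", "positive": True},
    {"id": "study02", "outcome": "adoption", "positive": False},
    {"id": "study03", "outcome": "implementation", "positive": True},
    {"id": "study04", "outcome": "implementation", "positive": True},
]

tallies = defaultdict(lambda: [0, 0])  # outcome -> [positive, total]
for s in studies:
    tallies[s["outcome"]][1] += 1
    if s["positive"]:
        tallies[s["outcome"]][0] += 1

for outcome, (pos, total) in tallies.items():
    print(f"{outcome}: {pos} of {total} studies reported a positive effect")
```

Vote counting deliberately ignores effect sizes and study weights, which is why the review pairs it with meta-summary methods.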