In recent years, the demand for evidence-based teen pregnancy prevention programs has increased, but practitioners often struggle to replicate and implement them as designed in real-world community settings. The purpose of this article is to describe the barriers and facilitators encountered during pilot-year attempts to implement an evidence-based teen pregnancy prevention program within three types of organizations: (1) small community-based organizations; (2) a school-based organization; and (3) a large, decentralized, city-sponsored summer youth program. We frame our discussion of these experiences within a systemic, multilevel implementation framework consisting of (1) core implementation components; (2) organizational components; and (3) external factors. This article explores the organizational and external implementation factors we encountered during the implementation process, describes the lessons we learned along the way, and offers strategies other practitioners can use to address these factors proactively from the start of program planning. These findings may provide useful insight for other organizations looking to implement multi-session, group-level interventions with fidelity.
Although previous evidence has indicated that this particular program can be effective, this study's findings indicate that it was not effective in this setting with this specific population. These results should motivate further inquiry into why the intervention works in some cases and what conditions are necessary for it to produce causal impacts.
Despite the increase in federal tiered evidence initiatives that require the use of rigorous evaluation designs, such as randomized experiments, there has been limited guidance in the evaluation literature on practical strategies to implement such studies successfully. This paper provides lessons learned in executing experiments in applied settings, such as schools, juvenile justice agencies, mental health clinics, reproductive health clinics, and job centers. To promote successful study implementation, evaluators must understand study roles, select suitable partners, and employ appropriate partnership models. In this article, we describe partner types and study roles, concretely illustrate how partnerships might be structured, discuss specific strategies to assess implementation partner capacity and maintain partner engagement, and consider how an evaluator can leverage the skills and resources of study partners to improve participant recruitment, enrollment, and retention. While the lessons are drawn from our experiences conducting 10 experimental studies, some may also apply to non-experimental evaluations.