Abstract: The authors applied contemporary methods from the evaluation literature to measure implementation in a residential treatment program for adolescent substance abuse. Two main components of the program's logic model were measured. Program structure (adherence to the intended framework of service delivery) was measured using data from daily activity logs completed by program staff. Treatment process, conceptualized as therapeutic milieu, was measured using an adapted version of a scale used to measure implementation in therapeutic communities. In addition, variability in implementation was measured using statistical process control (SPC) procedures. Adolescents completed, on average, 50% of the weekly prescribed services. The program's milieu was rated by the adolescents as highly therapeutic. Moreover, preliminary psychometrics suggest that therapeutic milieu can be measured reliably in adolescents. Both components were implemented consistently across adolescents. These findings are discussed along with implications for evaluation work in similar fields.
Why Evaluate Implementation?
Recent calls for accountability and better understanding of intervention processes (Dane & Schneider, 1998; Gresham, Gansle, Noell, Cohen, & Rosenblum, 1993; Moncher & Prinz, 1991; Waltz, Addis, Koerner, & Jacobson, 1993) have been answered by a growing emphasis on implementation evaluation (Dusenbury, Brannigan, Falco, & Hansen, 2003; Hogue et al., 1998; Orwin, 2000; Schlosser, 2002). This has included a substantial theoretical literature promoting increasingly sophisticated methods for implementation evaluation, along with progress toward routine inclusion of implementation and process evaluation procedures in reports on treatment and services outcomes (e.g., Dewa, Horgan, Russell, & Keates, 2001; Kam, Greenberg, & Walls, 2003). Evaluation or research that neglects the issue of implementation has been described as no longer acceptable (Mowbray, Holter, Teague, & Bybee, 2003).

At the least, implementation evaluation is critical for documenting program integrity, defined as the degree to which a service is delivered as intended by the program theory (Summerfelt, 2003). Beyond this, two of the most compelling reasons for evaluators to measure implementation are (a) to monitor program activities in order to identify problems in program implementation and (b) to measure variability in program delivery for later use in statistical analysis of program impact (Scheirer, 1994). The current study aimed to apply newly emerging methods from the evaluation literature to measure implementation of a residential treatment program for adolescents with substance abuse problems. The study highlights some of the lessons learned from applying up-to-date methodology and the resulting implications for future implementation studies, including those that go on to evaluate outcomes in relation to implementation. The study also advances the framework of implementation evaluation methodology for residential treatment in particular.
How Implementation Is Ty...