This paper reviews empirical research on evaluation use from 1986 to 2005, applying Cousins and Leithwood’s 1986 framework for categorizing empirical studies of evaluation use conducted since that time. The literature review located 41 empirical studies of evaluation use conducted between 1986 and 2005 that met minimum quality standards, and the Cousins and Leithwood framework allowed comparison with earlier findings over time. After the studies were initially grouped according to Cousins and Leithwood’s two categories and twelve characteristics, one additional category and one new characteristic were added to the framework: the new category is stakeholder involvement, and the new characteristic is evaluator competence (under the category of evaluation implementation). Findings point to the importance of stakeholder involvement in facilitating evaluation use and suggest that engagement, interaction, and communication between evaluation clients and evaluators are critical to the meaningful use of evaluations.
Evaluation researchers and practitioners acknowledge that involving stakeholders in the planning and implementation of an evaluation increases buy-in, understanding, and use. With the recent increase in multi-site evaluations of large federal programs, evaluators must think differently about how to encourage meaningful collaboration with stakeholders. To date, there has been no published measure of such involvement, despite recent calls for more systematic, replicable research. The purpose of this study was to validate the Evaluation Involvement Scale for use in multi-site evaluations. Between the fall of 2006 and the spring of 2007, data were collected through an electronic survey of, and phone interviews with, evaluators and principal investigators of four National Science Foundation program evaluations. Using Messick's unitary concept of validity as a framework, theoretical, statistical, and rational evidence is provided to support the use of the Evaluation Involvement Scale to measure stakeholder involvement in multi-site evaluations.
Decision makers in nonformal education programs can maximize the utility of their evaluation investment and improve program effectiveness by being more mindful of the potential uses of evaluation information. Evaluation use is a multifaceted construct that may include, but is not limited to, the implementation of evaluation recommendations or results. Three primary types of evaluation use have emerged from four decades of scholarly research: (1) instrumental, where the results are used in making decisions about program structure and function; (2) conceptual, where the results inform or educate decision makers about matters related to the program or topic being evaluated; and (3) persuasive or symbolic, where the results are used to influence or persuade others. This chapter expands on these definitions of evaluation use, describes some of the challenges presented by nonformal education settings, and discusses strategies for increasing evaluation use in nonformal education programs.