Social innovation has gained prominence as a way to address social problems and needs. Evaluators and social innovators are conceptualizing and implementing evaluation approaches for social innovation contexts; however, no systematic effort has yet been made to explore and assess the overlap between evaluation and social innovation based on the empirical knowledge base. We address this gap, drawing on 28 empirical studies of evaluation in social innovation contexts to describe what evaluation practices look like, what drives those practices, and how they affect social innovations. Findings indicate that most evaluations had developmental purposes, emphasized collaborative approaches, and used multiple methods. Prominent drivers were a complexity perspective, a learning-oriented focus, and the need for responsiveness. Reported influences on social innovations included advancing strategies, improving delivery, balancing aggregate and local information needs, and reducing risk. Conflict resolution, the quality of relationships, and the availability of time and capacity mediated these influences. More peer-reviewed empirical studies and a broader range of study designs are needed, including research on how evaluations influence social innovation processes over time, phases, space, and scale.
Social innovations (SIs) frequently bring previously unrelated actors, ideas, and practices together in new configurations with the goal of addressing social needs. However, the dizzying variety of definitions of SI and their dynamic, exploratory character raise dilemmas for evaluators tasked with evaluating them. This article is based on a systematic review of research on evaluation within SI contexts, specifically an analysis of 28 published peer-reviewed empirical studies. Given that design considerations are becoming increasingly important to evaluators as the complexity of social interventions grows, our objectives were to identify influences on the design of SI evaluations and to clarify which SI features should be taken into account when designing evaluations. We ultimately developed a conceptual framework to aid evaluators in recognizing some differences between SI and conventional social interventions and, correspondingly, the implications for evaluation design. This framework is discussed in terms of its implications for ongoing research and practice.
The basic ideas behind contribution analysis were set out in 2001. Since then, interest in the approach has grown and contribution analysis has been operationalized in different ways. In addition, several reviews of the approach have been published and raise a few concerns. In this article, I clarify several of the key concepts behind contribution analysis, including contributory causes and contribution claims. I discuss the need for reasonably robust theories of change and the use of nested theories of change to unpack complex settings. On contribution claims, I argue the need for causal narratives to arrive at credible claims, the limited role that external causal factors play in arriving at contribution claims, the use of robust theories of change to avoid bias, and the fact that opinions of stakeholders on the contribution made are not central in arriving at contribution claims.
This article is a review and integration of the evaluation utilization literature with a new focus on the use of technology to increase evaluation utility. Scholarship on evaluation utilization embodies one of the major and ongoing quandaries in the evaluation profession: What constitutes usefulness and relevance to stakeholders? We think a constructivist lens is helpful in making sense of the trajectory this literature has taken, in which what is “useful” and what culminates in “use” have become much more flexible notions that are in a constant state of negotiation between evaluators and evaluation stakeholders. We posit that it may be important for evaluators who are closely engaged with stakeholders to pay greater attention to this interactivity in order to build a common vision of what is “useful” at a given moment in time. While this is no small task, evaluators may have something to gain by exploring the wealth of digital technologies and social media tools now available. Using these tools in local-level, participatory-oriented contexts may be valuable for encouraging interactivity and, potentially, learning, creativity, and ownership. This article stresses that integrating technology into everyday evaluation practice, where possible, may ultimately enhance evaluation usefulness and relevance.