John Kania and Mark Kramer put forward “Collective Impact” in 2011 as a framework for organizing multi‐sector collaborative efforts to achieve change at scale. The collective impact theory of change posits that by establishing and implementing its five conditions, groups can achieve meaningful systems changes that create long‐term gains in social and environmental conditions. While uptake of the approach has been significant, questions remain about the degree to which collective impact actually works to achieve change at scale. In 2017, ORS Impact and Spark Policy Institute embarked on an evaluation effort to understand the degree to which the collective impact approach contributed to population‐level change across many sites. We sought to answer this question with as much rigor as possible, without attempting to simplify the complexity of the context, the variability of collective impact implementation, or the many interim changes needed to achieve impact at scale. This chapter shares the essential methods our research team used. We do not seek to share the findings; instead, we hope that others can learn from and use these methods to continue to strengthen the sector's understanding of when, how, and why different collaborative efforts work or do not. In addition to describing the key methods, we reflect on considerations, lessons learned, and recommendations for other evaluators who might seek to answer similar questions or use similar tools and methods.
This volume set out to document, illustrate, and critique the progress and innovation that occurred during the advocacy evaluation field's first phase of development. This final chapter identifies how the context in which advocacy evaluation plays out is shifting. It describes how these shifts affect how advocates, advocacy funders, and advocacy evaluators think about what works and what has value. Given this context, and lifting up the ideas of other chapter authors, the chapter concludes with a learning agenda for the field's next phase of development: four questions to help guide future field innovation and collective learning.
The development of a field of practice for policy advocacy evaluation is relatively recent within the evaluation community. Because of several unique features of policy advocacy, evaluation practitioners must adapt how they understand context, select methods, and support use. The authors of this article describe the elements that define this space and discuss the origins and development of advocacy evaluation within the evaluation community. They then review a broad (though not comprehensive) swath of the existing literature on the topic.