The Philadelphia Zoo's Measuring Mission project assessed the conservation-related impacts of a visit to the Zoo and documented the results in a way that would yield a set of easily actionable planning strategies. A logic model provided the theoretical framework and guided the development of survey items. Three groups were surveyed using a pre-post retrospective instrument: zoo visitors, members, and volunteers. This report includes findings from the visitor surveys only. Data were analyzed using factor analysis, correlations, and t-tests. Results revealed that the Philadelphia Zoo has been most successful in providing its guests with a satisfying animal viewing experience, facilitated by accessible, informative interpretive staff, but that guests do not always take advantage of opportunities to interact with staff. Success in achieving the Zoo's conservation mission was measured by comparing pre- and posttest scores on five outcome factors: conservation motivation, conservation knowledge, pro-conservation consumer skills, conservation attitudes/values, and readiness to take conservation action. The greatest gains were found in conservation knowledge and conservation motivation. Quality of exhibits and quality of staff stand out as the factors most influential on conservation outcomes. To ensure that results would be accessible to a wide variety of Zoo employees for planning, program and exhibit development, and staff training, nine strategies were identified as key to achieving success in the Zoo's mission. Measuring Mission has created a process for assessing the Zoo's mission impact, and has confirmed that high-quality exhibits interpreted by expert, readily available staff can influence conservation knowledge and motivation in particular.
This volume set out to document, illustrate, and critique the progress and innovation that occurred during the advocacy evaluation field's first phase of development. This final chapter identifies how the context in which advocacy evaluation plays out is shifting. It describes how these shifts affect how advocates, advocacy funders, and advocacy evaluators think about what works and what has value. Given this context, and lifting up the ideas of the other chapter authors, the chapter concludes with a learning agenda for the field's next phase of development: four questions to help guide future field innovation and collective learning.
The development of a field of practice for policy advocacy evaluation is relatively recent within the evaluation community. Because of its unique elements, policy advocacy requires practitioners to adapt their approaches to context, methods, and evaluation use. The authors of this article describe the individual elements that define this space and discuss the origins and development of advocacy evaluation within the evaluation community. The authors then review a broad (though not comprehensive) swath of the existing literature on the topic.
TCC Group conducted confidential interviews with a small number of staff at a subset of the 54 foundations that participated in TCC Group's Foundation Core Capacity Assessment Tool, to gain their perspective on lessons learned from the process. One interviewee remarked, "There can be a mindset among foundations that focusing on our own capacity may diminish our ability to be mission driven." Others may see addressing their own capacity needs as a luxury. Another foundation official interviewed for this article noted that in the