Leading evaluation practitioners were asked about lessons from the recent 26th Conference of the Parties (COP26) for evaluation practice. Contributors emphasize the importance of evaluating equity between rich and poor countries and other forms of climate injustice. The role of evaluation is questioned: what can evaluation be expected to do on its own, and what requires collaboration across disciplines, professions and civil society – and across generations? Contributors discuss the implications of the post-Glasgow climate ‘pact’ for the continued relevance of evaluation. Should evaluators advocate for the marginalized and become activists on behalf of sustainability and climate justice – as well as advocates of evidence? Accountability-driven and evidence-based evaluation is needed to assess the effectiveness of investments in adaptation and mitigation. Causal pathways in different settings, together with ‘theories of no-change’, are needed to understand gaps between stakeholder promises and delivery. Evaluators should measure unintended consequences and what is often left unmeasured, and be sensitive to failure and unanticipated effects of funded actions. Evaluation timescales and units of analysis beyond particular programmes are needed to evaluate the complexities of climate change and sustainability, and to take account of natural systems. The implications for evaluation commissioning and funding are discussed, as well as the role of evaluation in programme design and implementation.
Traditional monitoring, evaluation, and learning (MEL) approaches, methods, and tools no longer reflect the dynamic complexity of the severe (or “super-wicked”) problems that define the Anthropocene: climate change, environmental degradation, and global pandemics. In late 2019, the Adaptation Fund’s Technical Evaluation Reference Group (AF-TERG) commissioned a study to identify and assess innovative MEL approaches, methods, and technologies to better support and enable climate change adaptation (CCA) and to inform the Fund’s own approach to MEL. This chapter presents key findings from the study, with seven recommendations to support a systems innovation approach to CCA:

1. Promote and lead with a CCA systems innovation approach, engaging with key concepts of complex systems, super-wicked problems, the Anthropocene, and socioecological systems.
2. Engage better with participation, inclusivity, and voice in MEL.
3. Overcome risk aversion in CCA and CCA MEL by field testing new, innovative, and often riskier MEL approaches.
4. Demonstrate and promote the use of MEL to support and integrate adaptive management.
5. Work across socioecological systems and scales.
6. Advance MEL approaches to better support systematic evidence and learning for scaling and replicability.
7. Adapt or develop MEL approaches, methods, and tools tailored to CCA systems innovation.
Evaluability assessments (EAs) have differing definitions, focus on various aspects of evaluation, and have been implemented inconsistently over the last several decades. Climate change adaptation (CCA) programming presents particular challenges for evaluation given shifting baselines, variable time horizons, adaptation as a moving target, and the uncertainty inherent in climate change and its extreme and varied effects. The Adaptation Fund Technical Evaluation Reference Group (AF-TERG) developed a framework to assess the extent to which the Fund’s portfolio of projects has in place the structures, processes, and resources capable of supporting credible and useful monitoring, evaluation, and learning (MEL). The framework was applied to the entire project portfolio to determine the level of evaluability and to make recommendations for improvement. This chapter explores the assessment’s findings on designing programs and projects in ways that minimize these inherent challenges. It discusses how the EA process can help identify opportunities for strengthening both evaluability and a project’s MEL more broadly. A key conclusion was that the strength and quality of a project’s overall approach to MEL is a major determinant of its evaluability. Although the framework was applied retrospectively, EAs could also be used prospectively as quality assurance tools at the pre-implementation stage.