This article describes the development of the national evaluation system in South Africa, implemented since 2012 and led by the Department of Planning, Monitoring and Evaluation (DPME, previously the Department of Performance Monitoring and Evaluation) in the Presidency. It suggests emerging results, although an evaluation of the evaluation system, being carried out in 2015, will address these formally. Responding to dissatisfaction with government services, in 2009 the government placed a major emphasis on monitoring and evaluation (M&E). A ministry and department were created, initially focusing on monitoring but in 2011 developing a national evaluation policy framework, which has been rolled out since 2012. The system has focused on improving performance as well as accountability. Evaluations are proposed by national government departments and selected for a national evaluation plan. The relevant department implements the evaluations with the DPME, and findings go to Cabinet and are made public. So far, 39 evaluations have been completed or are underway, covering around R50 billion (approximately $5 billion) of government expenditure over a three-year expenditure framework. There is evidence that the first evaluations to be completed are having significant influence on the programmes concerned. The major challenge facing South Africa is to build the capacity of service providers and government staff so that more and better-quality evaluations can take place outside of, as well as through, the DPME.
Background: African countries are developing their monitoring and evaluation policies to systematise, structure and institutionalise evaluations and the use of evaluative evidence across the government sector. The pace at which evaluations are institutionalised and systematised across African governments is progressing relatively slowly.

Aims and objectives: This article offers a comparative analysis of Africa's national evaluation policy landscape, looking at the policies of Zimbabwe, South Africa, Nigeria, Kenya (not adopted) and Uganda. To achieve this aim, we unpack the different characteristics of the national evaluation policies, emerging lessons for countries that wish to develop a national evaluation policy, and key challenges countries face in evaluation policy development and implementation. The article draws on both a desktop review and the action research approaches of the Centre for Learning on Evaluation and Results Anglophone Africa to build national evaluation systems across the region. The approach has included peer learning and the co-creation of knowledge around public sector evaluation systems.

Key conclusions: The national evaluation policies reviewed share certain common features in terms of purpose and composition, and they struggle with common issues in institutionalising the evaluation system across the public sector. However, there are variations in the countries' guiding governance frameworks at the national level that shape the nature and content of the policies, as well as the ways in which the policies themselves are expected to guide the use of evaluative evidence for decision-making, policymaking and programming.

Key messages:
- Peer-to-peer learning is important for sharing experiences of developing national evaluation policy.
- Countries should develop their policies in line with their state architecture, context and relevance to their needs.
- Policies necessitate new ways of thinking about the practice of monitoring and evaluation.
- This article fills an important empirical lacuna on evidence use and policy development in Africa.
Background: Demand is growing for evidence for policy-informed decision-making, budgeting and programming. National evaluation systems (NESs) are being set up across Africa, together with the processes and other monitoring and evaluation (M&E) infrastructure needed for efficient and effective functioning.

Objectives: This article seeks to document comparative developments in the growth of NESs in Anglophone African countries, and to provide an understanding of these systems for capacity-development interventions in these countries. It also aims to contribute to the public debate on the development of national M&E systems, the institutionalisation of evaluation, and the use of M&E evidence in the larger African context.

Methods: This article uses four key dimensions as the conceptual framework of a national monitoring and evaluation system: M&E systems in the executive; the functioning of parliamentary M&E systems; the professionalisation of evaluation; and the existence of an enabling environment. A questionnaire based on these key dimensions was used to collect information from government and non-governmental personnel. The 2018 Mo Ibrahim index was used to collect information on the enabling environment.

Results: Findings indicate that all systems have stakeholders with different roles and contexts and are designed according to the state architecture and the prevailing resources and capacities.

Conclusions: The findings can be used as different entry points for developing and strengthening M&E capacities in the countries studied.
There is a growing recognition of the complex relationship between evaluation and research, and policy and practice. Policymaking is inherently political, and public administration is contingent on various factors other than evidence, such as budgets, capabilities and systems. This has led the Department of Planning, Monitoring and Evaluation (DPME) in South Africa to challenge conventional ideas of communication between evaluators, policymakers and practitioners, which are characterised by monologues from evaluators to policymakers and practitioners, reserved exclusively for communicating the finished product. This article reflects on the emerging work of the DPME evaluation system, which is investigating the relational dynamics between evaluators and programme personnel and encouraging more interactive and diversified communication throughout the evaluation process. The article offers observations from the public sector. The lessons and implications can be useful, first, to other countries establishing evaluation systems, and also to those with an interest in enhancing the use of evidence by government agencies in developing countries.
Background: This article shares lessons from four case studies documenting experiences of evidence use in different public policies in South Africa, Kenya, Ghana and the Economic Community of West African States (ECOWAS).

Objectives: Most literature on evidence use in Africa focuses either on one form of evidence, that is, evaluations or systematic reviews, or on the systems governments develop to support evidence use. However, the use of evidence in policy is complex and requires systems, processes, tools and information to flow between different stakeholders. In this article, we demonstrate how relationships between knowledge generators and users were built and maintained in the case studies, and how these relationships were critical for evidence use.

Method: The case studies were amongst eight carried out for the book 'Using Evidence in Policy and Practice: Lessons from Africa'. Ethnographic case studies, drawing on both secondary and primary research, including interviews with key informants and extensive document reviews, were carried out. The research and writing process involved policymakers, enabling the researchers to access participants' rich observations.

Results: The case studies demonstrate that initiatives to build relationships between different state agencies, between state and non-state actors, and between non-state actors are critical to enabling organisations to use evidence. This can be supported by creating spaces for dialogue that are sensitively facilitated and ongoing, so that actors become aware of the evidence, understand it and are motivated to use it.

Conclusion: Mutually beneficial and trustful relationships between individuals and institutions in different sectors are conduits through which information flows between sectors, new insights are generated and evidence is used.