Purpose
– The purpose of this paper is to demonstrate that both the processes and outcomes of advocacy can be evaluated in ways that support learning and accountability. The paper reviews the literature on evaluating advocacy, with a particular focus on development work, and describes an example of systematic evaluation: the Business Environment Strengthening in Tanzania – Advocacy Component, a business advocacy programme.
Design/methodology/approach
– The evaluation uses a Scientific Realist methodology to give a disaggregated, contextual analysis of advocacy, asking the characteristically Scientific Realist question: “What works for whom in what circumstances?” Complementary methods are being applied longitudinally over a five-year period and include stakeholder interviews, business surveys, diagnostic tools and learning seminars.
Findings
– The paper argues that evaluating advocacy is no more complex or difficult than evaluating other aspects of development work. Rigorous, cost-effective methods can be developed, provided that clear conceptualisation is carried out as an initial step. Systematic analysis of influencing tactics and capacity building demonstrates the relative skill of the advocacy organisations and allows the funder to see intermediate indicators of progress that would otherwise be invisible.
Practical implications
– Consistent conceptualisation and measurement allow comparison over time and between different types of projects and organisations. Integrating evaluation methods with the operation of campaigns or programmes allows the evaluator to give feedback in real time and to minimise the burden on evaluands.
Originality/value
– The paper is based on original research and evaluation practice. The field of advocacy evaluation is heavily concentrated on social change; this paper contributes an example of advocacy evaluation in the field of business advocacy and economic development. In addition, the example extends the field by considering the systematic evaluation of a whole programme of individual advocacy projects.
The author draws on an evaluator's perspective and practical experience of implementing performance measurement systems in the public, private and voluntary sectors to devise ten principles of good practice in performance measurement: conceptualisation, stakeholder approach, clarity, balance, ownership, usefulness, accuracy, contextualisation, dynamism and value for money. These principles are not easily achieved, and they require adjustments that are difficult to implement in top-down performance measurement systems. Locally determined systems have a better chance of serving the learning-based and practical purposes of performance measurement, as opposed to political and symbolic purposes. Incorporating an evaluation element can moderate some of the potential weaknesses of a top-down approach and take advantage of the scale of a national system to provide a powerful, contextualised learning repository.
This paper describes the issues, processes and difficulties encountered, and the solutions proposed, when introducing an evaluation system into a large and complex voluntary organisation providing counselling and relationship support services. The organisation, Relate, has 2,000 counsellors in over 600 locations operated by a network of 81 Relate Centres, and over 140,000 people use its services annually. The project arose from a need, clearly identified at a Mental Health Foundation conference in 1993, for a common measure to evaluate 'talking' therapies; for ways of measuring costs and benefits; and for means of assessing the skills and competencies of therapists. The project set out to (1) identify and pilot a validated scale that could demonstrate the value of Relate's work and (2) develop and pilot a further measure, broader than the validated scale, in order to foster organisational learning and develop good practice. The pilot encountered a range of methodological challenges, and recognition of these may smooth the way for other organisations planning to develop an ongoing system for evaluating counselling and relationship support.