2016
DOI: 10.18296/em.0011

Developing a thoughtful approach to evaluation: Values-driven guidelines for novice evaluators

Abstract: …and practice tensions inherent in programme evaluation. This article provides guidelines to assist novice evaluators in their journey to become "thoughtful" practitioners balancing methodological credibility with stakeholder empowerment. The approach is informed by an analysis both of key themes and tensions identified in the evaluation literature, and of the first author's experience as a novice evaluator contending with the difficulties of first articulating, and then advancing, the values she sought to embe…

Cited by 9 publications (10 citation statements). References 28 publications.
“…Global debates on professional standards in evaluation practice are becoming more aware and inclusive of contextual assumptions and conditions. This is illustrated by the fact that regional evaluation guidelines are being codified and revised in ways that go beyond cosmetic adaptations of developed-country guidelines and rather start from local conceptions of core evaluation-related concepts such as the nature of the person, accountability, value, relevance, and impact, among others (Deane & Harré, 2016). One of the regions with an active debate on “indigenizing” or contextualizing evaluation theory, methods, and personnel is Africa, where an expanding and professionalizing evaluation sector is looking to define itself beyond its roots in the global aid industry and the still-dominant unequal power dynamics of international donor/local beneficiary relationships (Chilisa & Malunga, 2012).…”
Section: Introduction
confidence: 99%
“…Many express a need for a quick outcome evaluation to satisfy funder requirements. Pressuring stakeholders to engage in a participatory process risks the “empowerment evaluation paradox” (Deane and Harré, 2016), whereby our desire to authentically involve stakeholders can result in feelings of disempowerment if there is a lack of responsiveness to stakeholders’ other needs. At the same time, when done well, theory-driven evaluation and evaluability assessment can save costs and time, and lead to better evaluation design (Donaldson, 2003; Wholey, 2010).…”
Section: Discussion
confidence: 99%
“…We have not fully embraced empowerment evaluation as a model in part because of the evaluation context we operate in as detailed above. A resource-constrained climate can create a paradox with respect to the traditional approach to empowerment evaluation—extensive involvement can become disempowering because it does not align with stakeholder needs (Deane and Harré, 2016). In addition, critics of empowerment evaluation emphasize that relinquishing all control to stakeholders jeopardizes the credibility of the evaluative conclusions as these likely reflect self-interested biases (Stufflebeam, 1994); thus, in some stages of our approach, the evaluation team works independently.…”
Section: The Theoretical Foundations Of Our Integrative Approach
confidence: 99%
“…Moreover, CBA provides an indicator of efficiency, whereas DFID also requires consideration of equity. More broadly, multiple types of evidence and ways of gaining knowledge should inform evaluative judgements about complex social issues (Deane & Harré, 2016; Greene, 2005; Wehipeihana & McKegg, 2018). For example, qualitative evidence plays a critical role in MUVA’s VFM assessments, in describing the value of reflection and learning processes in adaptive programming.…”
Section: Reflection
confidence: 99%