Articles

Articles should deal with topics applicable to the broad field of program evaluation. Articles may focus on evaluation methods, theory, practice, or findings. In all cases, implications for practicing evaluators should be clearly identified. Examples of contributions include, but are not limited to, reviews of new developments in evaluation, descriptions of current evaluation studies, critical reviews of some area of evaluation practice, and presentations of important new techniques. Manuscripts should follow APA format for references and style. Length per se is not a criterion in evaluating submissions.

ABSTRACT

The paper discusses two common scenarios in which evaluators must conduct impact evaluations while working under budget, time, or data constraints. Under the first scenario, the evaluator is not called in until the project is already well advanced, and there is a tight deadline for completing the evaluation, frequently combined with a limited budget and no access to baseline data. Under the second scenario, the evaluator is called in early, but for budget, political, or methodological reasons it is not possible to collect baseline data on a control group, and sometimes not even on the project population. As a result of these constraints, many of the basic principles of impact evaluation design (comparable pretest-posttest design, control group, instrument development and testing, random sample selection, control for researcher bias, thorough documentation of the evaluation methodology, etc.) are often sacrificed. We describe the "Shoestring Evaluation" approach, which is being developed to provide tools for ensuring the highest-quality evaluation possible under constraints of limited budget, time, and data availability. While most of the data collection and analysis techniques will be familiar to experienced evaluators, what is new is the combination of these techniques into an integrated six-step approach covering: (1) planning and scoping the evaluation; (2-4) options for dealing with constraints related to costs, time, and data availability (which could include reconstructing baseline conditions and control groups); (5) identifying the strengths and weaknesses (threats to validity and adequacy) of the evaluation design; and (6) taking measures to address the threats and strengthen the evaluation design and conclusions. When necessary, many of these corrective measures can be introduced at a very late stage, even when the draft evaluation report has already been produced.
This paper reviews the main challenges and opportunities for incorporating mixed method approaches into research and evaluation on the effectiveness and impacts of international development. It draws on the authors' experience over several decades working in both academia and with a wide range of multilateral and bilateral development agencies, non-profit organisations and developing country governments on the evaluation of the effectiveness and impacts of development interventions. Development research is informed by current research trends in Northern countries, but it is often conducted within very distinct economic, political, cultural and organisational contexts. While certainly not unique to the international context, many development evaluations are subject to a range of budget, time, data, political and organisational constraints that tend to be more severe than those faced by researchers working in industrialised nations. Moreover, due to the more limited opportunities to conduct research in developing countries, individual studies or evaluations are often required to address a broader set of questions. So while a researcher in the US may be able to focus exclusively on a rigorous summative evaluation designed to address a limited range of questions on quantitative impacts, the same researcher evaluating a major development intervention in Latin America, Africa or Asia may be asked to address a wider range of summative and formative questions. We argue that the demand for multipurpose evaluations in developing countries opens up opportunities for a broader application of mixed method approaches than is usually the case in 'mainstream' mixed method research. We hope that this paper will help readers to understand the unique challenges, and also the great opportunities, for strengthening the use of mixed methods in the international development context.
International program evaluation is a booming business, with important and challenging evaluations of development programs being conducted in almost every country in the developing world. However, many U.S. domestic evaluation practitioners are not yet familiar with this field. Evaluators of international development programs normally must operate in a very different environment from the one they would expect to find when evaluating U.S. programs. These differences are discussed, and a number of promising developments and methodological approaches are described here. I conclude by suggesting a number of areas in which a closer exchange of experiences between U.S. evaluation practitioners and their colleagues from developing countries could be mutually beneficial.