The reporting of evaluation outcomes can be a point of contention between evaluators and policy-makers when a given reform fails to fulfil its promises. Whereas evaluators are required to report outcomes in full, policy-makers have a vested interest in framing these outcomes in a positive light, especially when they previously expressed a commitment to the reform. The current evidence base is limited to a survey of policy evaluators, a study on reporting bias in education research, and several studies investigating the influence of industry sponsorship on the reporting of clinical trials. The objective of this study was twofold. Firstly, it aimed to assess the risk of outcome reporting bias (ORB, or 'spin') in pilot evaluation reports, using seven indicators developed by clinicians. Secondly, it sought to examine how the government's commitment to a given reform may affect the level of ORB found in the corresponding evaluation report. To answer these questions, 13 evaluation reports were content-analysed, all of which found a non-significant effect of the intervention on its stated primary outcome. These reports were systematically selected from a dataset of 233 pilot and experimental evaluations spanning three policy areas and 13 years of government-commissioned research in the UK. The results show that the risk of ORB is real: every study reviewed here resorted to at least one of the presentational strategies associated with a risk of spin. This study also found a small, negative association between the seniority of the reform's champion and the risk of ORB in the evaluation of that reform. The publication of protocols and the use of reporting guidelines are recommended.
For pilot or experimental employment programme results to apply beyond their test bed, researchers must select 'clusters' (i.e. the job centres delivering the new intervention) that are reasonably representative of the whole territory. More specifically, this requirement must account for conditions that could artificially inflate the effect of a programme, such as the fluidity of the local labour market or the performance of the local job centre. Failure to achieve representativeness results in Cluster Sampling Bias (CSB). This paper makes three contributions to the literature. Theoretically, it approaches CSB as a human behaviour, offering a comprehensive theory whereby researchers with limited resources and conflicting priorities tend to oversample 'effect-enhancing' clusters when piloting a new intervention. Methodologically, it advocates a 'narrow and deep' scope, as opposed to the 'wide and shallow' scope that has prevailed so far; the PILOT-2 dataset was developed to test this idea. Empirically, it provides evidence on the prevalence of CSB. In conditions similar to the PILOT-2 case study, investigators (1) do not sample clusters with a view to maximising generalisability; (2) do not oversample 'effect-enhancing' clusters; (3) consistently oversample some clusters, including those with higher-than-average client caseloads; and (4) report their sampling decisions in an inconsistent and generally poor manner. In conclusion, although CSB is prevalent, it remains unclear whether it is intentional and meant to mislead stakeholders about the expected effect of the intervention, or attributable to higher-level constraints or other considerations.
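The mechanism behind CSB can be made concrete with a minimal simulation. This sketch is illustrative only and is not drawn from the paper or the PILOT-2 dataset: cluster effect sizes and sample sizes are invented, and it simply contrasts a random pilot sample of clusters with one that oversamples 'effect-enhancing' clusters.

```python
import random

random.seed(0)

# Hypothetical setup (not from the paper): 200 job centres ('clusters'),
# each with a true programme effect drawn from a normal distribution.
clusters = [random.gauss(0.10, 0.05) for _ in range(200)]
national_mean = sum(clusters) / len(clusters)

# Representative pilot: a simple random sample of 20 clusters.
random_sample = random.sample(clusters, 20)
random_est = sum(random_sample) / len(random_sample)

# Biased pilot (CSB): oversample the 20 most 'effect-enhancing' clusters.
biased_sample = sorted(clusters, reverse=True)[:20]
biased_est = sum(biased_sample) / len(biased_sample)

print(f"national mean effect : {national_mean:.3f}")
print(f"random-sample pilot  : {random_est:.3f}")
print(f"biased pilot (CSB)   : {biased_est:.3f}")  # inflated estimate
```

The biased pilot systematically overstates the effect the programme would have if rolled out nationally, which is why the representativeness requirement described above matters.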
There could be few scholars better placed than Peter Saunders to attempt the kind of synthesis that is this book's ambition and contribution. Long a resilient and bridge-building researcher, Saunders has always insisted on the importance of blending concepts and perspectives from different disciplines and intellectual traditions. He has also been a major force in moving poverty research beyond its tendency to fasten upon new approaches (poverty lines, compound disadvantage, social exclusion or deprivation) as if each somehow replaces the last. He has also grasped, to a greater extent than many other researchers, the importance of listening to the