2014
DOI: 10.1111/1759-5436.12112

Things You Wanted to Know about Bias in Evaluations but Never Dared to Think

Abstract: The thrust for evidence-based policymaking has paid little attention to problems of bias. Statistical evidence is more fragile than generally understood, and false positives are all too likely given the incentives of policymakers and academic and professional evaluators. Well-known cognitive biases make bias likely for not dissimilar reasons in qualitative and mixed methods evaluations. What we term delinquent organisational isomorphism promotes purportedly scientific evaluations in inappropriate institutional…

Cited by 21 publications (13 citation statements); references 57 publications.
“…False positives can be caused by questionable statistical practices (called 'p-hacking', 'fiddling', 'spin', or 'data massage') that turn a negative result into a positive result (Boutron et al., 2010; Chan et al., 2014; Dwan et al., 2008; Kirkham et al., 2010; Simmons et al., 2011). Selectively removing particular outliers, undisclosed data dredging, selective stopping, incorrect rounding down of p-values, or trying out various statistical tests and subsequently reporting only the most significant outcomes are some of the mechanisms that can cause false positives to appear in a publication (e.g., Bakker & Wicherts, 2014; Camfield et al., 2014; Goodman, 2014; Leggett et al., 2013; Simmons et al., 2011; Strube, 2006). Yet another mechanism that could cause spurious positive results is data fabrication.…”
Section: Introduction 1. The Abundance of Positive Results in the Scie… (mentioning)
Confidence: 99%
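The "try several tests, report the best one" mechanism quoted above is easy to demonstrate. The sketch below is not from any of the cited papers; the sample size, the particular tests, and the number of analyst choices are arbitrary assumptions. It compares an honest, pre-specified t-test against a best-of-four analysis on data with no true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 5_000, 30, 0.05

naive_hits = hacked_hits = 0
for _ in range(n_sims):
    # Two samples from the SAME distribution: any "significant" result is a false positive.
    a = rng.normal(size=n)
    b = rng.normal(size=n)

    # Honest analysis: one pre-specified test.
    naive_p = stats.ttest_ind(a, b).pvalue

    # Flexible analysis: try several specifications, keep the smallest p-value.
    trimmed_a = np.delete(a, np.argmax(np.abs(a)))   # ad hoc "outlier" removal
    trimmed_b = np.delete(b, np.argmax(np.abs(b)))
    candidates = [
        naive_p,
        stats.mannwhitneyu(a, b).pvalue,             # swap in a different test
        stats.ttest_ind(trimmed_a, trimmed_b).pvalue,
        stats.ttest_ind(a[:15], b[:15]).pvalue,      # "selective stopping" at n = 15
    ]

    naive_hits += naive_p < alpha
    hacked_hits += min(candidates) < alpha

print(f"false-positive rate, pre-specified test: {naive_hits / n_sims:.3f}")   # ~0.05
print(f"false-positive rate, best of four:       {hacked_hits / n_sims:.3f}")  # well above 0.05
```

Even with only four correlated looks at the same null data, the realized error rate rises well above the nominal 5%, which is one concrete sense in which statistical evidence is more fragile than generally understood.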
“…To some extent, the same biases apply to medical providers. Importantly, single blindness can favor investigators' (often subconscious) self-serving biases (Camfield et al., 2014), hence the advantage of double over single blindness. This argument is particularly relevant for soft or subjective tested outcomes, which are common in the social sciences.…”
Section: Equipoise vs. Blindness (mentioning)
Confidence: 99%
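A minimal sketch of why double blinding matters for subjective outcomes, under one loud assumption: an unblinded assessor subconsciously scores treated participants higher by a modest fraction of a standard deviation. The bias magnitude (0.3 SD) and sample size are illustrative, not taken from the cited studies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sims, alpha = 50, 5_000, 0.05
assessor_bias = 0.3   # assumed subconscious up-scoring of treated subjects (SD units)

fp_blind = fp_unblind = 0
for _ in range(n_sims):
    # The treatment truly does nothing: both arms come from the same distribution.
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.0, 1.0, n)

    # Double-blind: the assessor cannot tell the arms apart, so scoring is unbiased.
    fp_blind += stats.ttest_ind(treated, control).pvalue < alpha
    # Single-blind: the assessor knows who was treated and scores them higher.
    fp_unblind += stats.ttest_ind(treated + assessor_bias, control).pvalue < alpha

print(f"false-positive rate, double-blind scoring: {fp_blind / n_sims:.3f}")    # ~0.05
print(f"false-positive rate, unblinded scoring:    {fp_unblind / n_sims:.3f}")  # far higher
```

A fixed scoring bias of 0.3 SD turns a true null into a "significant" effect in roughly a third of trials at this sample size, which is why blinding of outcome assessors matters most for soft outcomes.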
“…Many economics analyses are underpowered (Ioannidis & Doucouliagos, 2013; Ioannidis et al., 2016), leading to non-trivial risks of false positives (Button et al., 2013), which can be exacerbated by researcher or publication biases (Camfield, Duvendack, & Palmer-Jones, 2014; Maniadis, Tufano, & List, 2014).…”
Section: Statistical Power of the Sari Analysis (mentioning)
Confidence: 99%
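The link between low power and false positives can be made concrete with a positive-predictive-value simulation in the spirit of the Ioannidis-style argument cited above. The prior share of true hypotheses (0.2) and the true effect size (d = 0.5) are arbitrary assumptions for illustration, not estimates from the cited work.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n_sims = 0.05, 4_000
effect = 0.5        # assumed standardized effect size when a real effect exists
prior_true = 0.2    # assumed share of tested hypotheses that are actually true

def ppv(n):
    """Share of 'significant' findings at per-group sample size n that are real."""
    true_pos = false_pos = 0
    for _ in range(n_sims):
        real = rng.random() < prior_true
        a = rng.normal(effect if real else 0.0, 1.0, n)
        b = rng.normal(0.0, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            true_pos += real
            false_pos += not real
    return true_pos / max(true_pos + false_pos, 1)

for n in (10, 30, 100):   # from badly underpowered to well powered
    print(f"n={n:>3}: share of significant results that are real ≈ {ppv(n):.2f}")
```

At small n, a large fraction of "positive" findings are noise even before researcher or publication bias enters; adequate power protects not only against missed effects but against a literature dominated by false positives.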