2021
DOI: 10.1016/j.obhdp.2021.02.003
Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis

Cited by 84 publications (57 citation statements)
References 78 publications
“…The idea of inviting different analysis teams to answer the same research question using the same data is relatively novel (Silberzahn and Uhlmann, 2015; see Aczel et al., 2021 for general guidelines); we are aware of three papers in neuroscience (Botvinik-Nezer et al., 2020; Fillard et al., 2011; Maier-Hein et al., 2017), one in microeconomics (Huntington-Klein et al., 2021), and eight in psychology, three of which pertain to cognitive modeling (Boehm et al., 2018; Dutilh et al., 2019; Starns et al., 2019), while the remaining five are from other fields of psychology (Bastiaansen et al., 2020; Salganik et al., 2020; Schweinsberg et al., 2021; Silberzahn et al., 2018; van Dongen et al., 2019). Most similar to the current work are the projects that applied a many-analysts approach to perform statistical inference on the relation between two variables, such as skin color and red cards in soccer (Silberzahn et al., 2018), scientist gender and verbosity (Schweinsberg et al., 2021), or amygdala activity and stress (van Dongen et al., 2019). While the exact focus of previous many-analysts projects varied (e.g., experience sampling, fMRI preprocessing, predictive modeling, proof of the many-analysts concept), the take-home messages were rather consistent: all papers showed that different yet equally justifiable analytic choices result in very different outcomes, sometimes with statistically significant effects in opposite directions (e.g., Schweinsberg et al., 2021; Silberzahn et al., 2018).…”
Section: A Many-Analysts Approach
Confidence: 99%
“…It is important to note that whenever the co-authors’ behavior is itself the subject of the study, they should be treated like human participants, respecting ethical and data-protection regulations. Useful templates for project advertisements and analyst surveys can be found in Silberzahn et al. (2018) and Schweinsberg et al. (2021).…”
Section: Multi-Analyst Guidance
Confidence: 99%
“…As an extension, the co-analysts can be asked to record considered-but-rejected analysis choices and the reasoning behind their decisions (e.g., via commented code, logbooks, or dedicated solutions such as DataExplained [Schweinsberg et al., 2021]). These logs can reveal where and why co-analysts diverge in their choices.…”
Section: Multi-Analyst Guidance
Confidence: 99%
“…The problem is more relevant in the social sciences, where the modeller's decisions have a substantial effect on the results because of the greater indeterminacy in the definition of the prediction task (e.g., [23]). Likewise, when complex databases are used, different decisions may lead to opposite conclusions [24].…”
Confidence: 99%