2020
DOI: 10.1037/bul0000220
Crowdsourcing hypothesis tests: Making transparent how design choices shape research results.

Abstract: Author contributions: The 1st through 4th and last authors developed the research questions, oversaw the project, and contributed equally. The 1st through 3rd authors oversaw the Main Studies and Replication Studies, and the 4th, 6th, 7th, and 8th authors oversaw the Forecasting Study. The 1st, 4th, 5th, 8th, and 9th authors conducted the primary analyses. The 10th through 15th authors conducted the Bayesian analyses. The 1st and 16th authors conducted the multivariate meta-analysis.

Cited by 156 publications (139 citation statements)
References 172 publications (194 reference statements)
“…Further, our finding that relative need was the primary determinant of generosity among Indonesian and Bangladeshi participants mirrors findings from other economic games conducted in rural Fiji (58). Nonetheless, all operationalizations are imperfect, and even seemingly arbitrary differences between protocols can generate dramatically different results (21). We know embarrassingly little about what construct is captured by the typical measure of generosity used in this study (dichotomous choices between amounts of currency for self versus other) and how well it correlates with different alternative operationalizations.…”
Section: Discussion (supporting)
confidence: 76%
“…Those that have directly engaged with concerns about generalizability focus largely on experimental design and statistical analysis. For instance, radical randomization of experimental parameters (20) and crowdsourcing operationalizations of theoretical constructs (21) and analytical choices (22) have all been proposed as ways to reveal how effects vary due to arbitrary choices that researchers make when designing studies. Here we focus on another longstanding proposal to improve generalizability: increasing sample diversity (18,23).…”
Section: Moving Generalizability Into the Limelight (mentioning)
confidence: 99%
“…As a field, we must embrace examining problems from multiple angles and accommodate conflicting perspectives. As we have recently seen, design (Landy et al, 2019) and analysis (Silberzahn et al, 2018) choices have a dramatic impact on results. These findings not only highlight the need to be open and detailed about our design choices but also point to the need for more studies that evaluate the same research question using different methods (Fiedler, 2017).…”
Section: Fresh Perspectives (mentioning)
confidence: 99%
“…Critically, authors can continuously revise their manuscripts based on the comments received, and the platform will maintain all versions of a manuscript, allowing scientists (and the public) to see how manuscripts evolve over months or even decades. Finally, nimbler publishing platforms designed for the digital age could crowdsource even peer-reviewing (for crowdsourcing research, see Landy et al, 2020; Uhlmann et al, 2019), allowing for continuous, rich, and scholarly discussions while ensuring transparency and rigor (e.g., Stern & O'Shea, 2019).…”
Section: Promises and Perils of Experimentation (mentioning)
confidence: 99%