2021
DOI: 10.1111/ecin.12992
The influence of hidden researcher decisions in applied microeconomics

Abstract: Researchers make hundreds of decisions about data collection, preparation, and analysis in their research. We use a many-analysts approach to measure the extent and impact of these decisions. Two published causal empirical results are replicated by seven replicators each. We find large differences in data preparation and analysis decisions, many of which would not likely be reported in a publication. No two replicators reported the same sample size. Statistical significance varied across replications, and for o…

Cited by 86 publications (58 citation statements)
References 35 publications
“…The 'reproducibility crisis' discussions suggest that several biases enter the process determining which studies get published (25). If true and generalizable, our findings demonstrate how a non-trivial segment of researchers' findings could come to dominate a field, even if they are a highly selective subset of all potential research (e.g., only those 15% that found support of a hypothesis). Therefore, we support recommendations of previous many-analysts studies calling for greater 'uncertainty literacy' among both producers and consumers of science, clearer research questions and hypotheses, and more transparency from both scientists (not only for sharing knowledge but here to inform consumers about the hidden universe we identify) and gatekeepers of publications and funding (14,16,18,20,21).…”
Section: Implications and Limitations (supporting)
confidence: 82%
“…At the same time, and similar to other same-data, same-hypothesis studies, we found that removing publication and other biasing incentives still leaves us confronted with a universe of decisions and outcomes. Thus, our findings suggest that other recommendations from these studies, such as greater standardization of data preparation, model averaging, more replications, and usage of multiverse simulations, will not necessarily reduce uncertainty given a hidden universe of choices that cannot be easily simulated (16,18,21,26). These choices will likely outweigh the number of researchers actively working in a given field testing a given hypothesis.…”
Section: Implications and Limitations (mentioning)
confidence: 91%
“…If a research team has a large amount of flexibility to define how it will approach a particular hypothesis, study registration may not be sufficient to avoid the criticism of "hypothesizing after the results are known," also known as HARKing (Kerr 1998). Examples of such flexibility include a broad range of concrete measures that could each be argued to measure an abstract concept, choices about sample inclusion or exclusion, or decisions about how to construct derived indicators (Huntington-Klein et al 2021). When researchers are collecting a large amount of information and have leverage over even a moderate number of these options, it is often possible to obtain almost any desired result (Gelman and Loken 2013).…”
Section: Writing Preanalysis Plans (mentioning)
confidence: 99%
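The combinatorics behind this flexibility can be made concrete with a short sketch. The decision names and option counts below are hypothetical, invented only to illustrate how a handful of individually defensible choices multiplies into a large specification multiverse; they are not taken from the study itself:

```python
import itertools

# Hypothetical researcher decisions, each with a few defensible options.
# Names and counts are illustrative only.
decisions = {
    "outcome_measure": ["test_score", "graduation", "earnings"],
    "sample_rule": ["drop_outliers", "keep_all"],
    "controls": ["none", "demographics", "demographics+region", "full"],
    "estimator": ["ols", "fixed_effects"],
}

# Every combination of choices is one possible specification.
multiverse = list(itertools.product(*decisions.values()))
print(len(multiverse))  # 3 * 2 * 4 * 2 = 48 specifications
```

Even this toy setup yields 48 distinct analyses from four decisions; with the hundreds of decisions the abstract describes, the specification space grows far beyond what any registration or multiverse simulation can exhaustively enumerate.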
“…In another study, 73 teams used the same data to test a single hypothesis; the "tremendous variation" in conclusions led the researchers to conclude there is a "vast universe of research design variability normally hidden from view" [2]. These and other studies [3][4][5][6][7][8] demonstrate that data is not transparent but requires interpretation. Yet the biases that impinge upon the interpretation are not well understood.…”
Section: Introduction (mentioning)
confidence: 99%