ORIGINAL RESEARCH REPORT

A New Replication Norm for Psychology

Etienne P. LeBel *

In recent years, there has been growing concern regarding the replicability of findings in psychology, including a mounting number of prominent findings that have failed to replicate via high-powered independent replication attempts. In the face of this replicability "crisis of confidence", several initiatives have been implemented to increase the reliability of empirical findings. In the current article, I propose a new replication norm that aims to further boost the dependability of findings in psychology. Paralleling the extant social norm that researchers should peer review about three times as many articles as they themselves publish per year, the new replication norm states that researchers should aim to independently replicate important findings in their own research areas in proportion to the number of original studies they themselves publish per year (e.g., a 4:1 original-to-replication studies ratio). I argue this simple approach could significantly advance our science by increasing the reliability and cumulative nature of our empirical knowledge base, accelerating our theoretical understanding of psycholo...

There is growing consensus that psychology faces a replicability "crisis of confidence" [70,69,89,66,67], stemming from the fact that a growing number of findings cannot be reproduced by high-powered independent replication attempts that duplicate the original methodology as closely as possible. Across all areas of psychology, there is a growing list of (prominent) findings that have not held up to independent replication attempts, including findings from cognitive psychology (retrieval- [38,17,51,18,31,57,72,71,94,44,46,52,34]).

More generalizable evidence of a replicability problem comes from an ambitious and unprecedented large-scale crowdsourced project, the Reproducibility Project [66,67]. In this project, researchers were unable to replicate about 60% of the 100 findings sampled from the 2008 issues of Psychological Science, Journal of Personality and Social Psychology, and Journal of Experimental Psychology: Learning, Memory, and Cognition [68]. In another large-scale meta-scientific investigation, about 70% (16 out of 23) of important findings from cognitive and social psychology also could not be replicated [64]. Though there are different ways to interpret successful versus unsuccessful replication results [91], taken together these observations strongly suggest that psychology currently has a general replicability problem (as do several other areas of science, including the cancer cell biology and cardiovascular health literatures [8,75]).

New Initiatives and Reforms

Several new initiatives have been launched to improve research practices in order to increase the reliability of findings in psychology. For instance, higher reporting standards have recently been instituted at several prominent psychology journals [21,87,88,47]. At such journals (e.g., Psychological Science, Memory & Cognition, Attention,