2022
DOI: 10.1146/annurev-psych-020821-114157
Replicability, Robustness, and Reproducibility in Psychological Science

Abstract: Replication—an important, uncommon, and misunderstood practice—is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur inn…

Cited by 403 publications (347 citation statements)
References 239 publications (231 reference statements)
“…Large-scale replication studies in the social and behavioral sciences provide evidence of replicability challenges ( Camerer et al, 2016 ; Camerer et al, 2018 ; Ebersole et al, 2016 ; Ebersole et al, 2020 ; Klein et al, 2014 ; Klein et al, 2018 ; Open Science Collaboration, 2015 ). In psychology, across 307 systematic replications and multisite replications, 64% reported statistically significant evidence in the same direction and effect sizes 68% as large as the original experiments ( Nosek et al, 2021 ).…”
Section: Introduction (mentioning)
confidence: 83%
“…The clearest indicator of replication success was that smaller effects were less likely to replicate than larger effects, and this was particularly notable for animal experiments because they tended to have smaller original effect sizes than did non-animal experiments. Research into replicability in other disciplines has also found that findings with stronger initial evidence (such as larger effect sizes and/or smaller p-values) are more likely to replicate ( Nosek et al, 2021 ; Open Science Collaboration, 2015 ), and it may be worth exploring whether findings from other disciplines – such as more surprising findings being less likely to replicate – can be generalized to cancer biology. There are also unique qualities of research in cancer biology that could be related to replicability, and a number of ongoing projects exploring replication in preclinical research ( Amaral et al, 2019 ; Drude et al, 2021 ) will add to the data presented here and increase our understanding of replication and translational success ( Chakroborty et al, 2020 ).…”
Section: Discussion (mentioning)
confidence: 99%
“…Finally, our findings were nearly identical for both samples, which underlines the replicability of our results. This fact is particularly noteworthy given the “replication crisis” and increasing claims for more replicable, robust, and reproducible research in psychological science ( 36 ). However, the interactive effect with extraversion was found in the DREAM but not in the MARI sample, which might be due to the larger sample size.…”
Section: Discussion (mentioning)
confidence: 99%
“…Gender differences between mothers and fathers were tested exploratorily. The findings from both studies were compared to evaluate the replicability, robustness, and reproducibility of the effects ( 36 ).…”
Section: Introduction (mentioning)
confidence: 99%
“…We note that there are no agreed-upon guidelines for evaluating the strength of scientific evidence for a specific theory, model, or phenomenon. However, in the last ten years, as a consequence of the reproducibility or replicability crisis that has affected many scientific fields (e.g., psychology, economics, neuroscience) (Wikipedia contributors, 2021; Nosek et al, 2022), direct or conceptual replications (LeBel, Vanpaemel, Cheung, & Campbell, 2019; Schmidt, 2009)—that is, studies using the same experimental design, sample characteristics, materials, and procedures, or variations thereof that test the generalizability of previous findings—are considered the gold standard for supporting the reality of a phenomenon. This recommendation supports the hierarchy presented in Figure 1: i.e., that systematic reviews and meta-analyses must be considered the strongest tools for establishing the strength of scientific evidence.…”
Section: Introduction (mentioning)
confidence: 99%