research question and the quality of the methodology, not whether the findings are positive, novel, and clean. More than 250 journals have adopted RRs since 2013 on the theorized promise of improving rigor and credibility. Initial evidence suggests that RRs are (1) effective at mitigating publication bias, with a sharp increase in published negative results compared with the standard model (26,27), and (2) cited as often as, or more often than, other articles in the same journals (28). However, there is no evidence about whether scholars perceive RRs to have higher, lower, or similar research quality compared with papers published in the standard model. The RR format could also have costs, such as authors pursuing less interesting questions or conducting less novel or creative research (29,30).

We conducted an observational investigation of perceptions of the quality and importance of RRs compared with the standard model across a variety of outcome criteria. We recruited 353 researchers to each peer review a pair of papers: one drawn from 29 RRs in psychology and neuroscience, and one from 57 matched non-RR comparison papers. Comparison papers addressed similar topics; about half were by the same first or corresponding authors, and about half were published in the same journal. RRs are a popular format for replication studies (3,31), but replications are rare in the standard model, so we excluded replication RRs. Researchers were assigned to papers according to their self-reported expertise based on the papers' keywords. On average, researchers reported being qualified to review the papers (N=353; RR M=3.74, SD=1.02; comparison paper M=3.59, SD=1.07; range 1 [not at all qualified] to 5 [substantially qualified]). Reviewers evaluated 19 outcome criteria, including the quality, rigor, novelty, creativity, and importance of the papers' methodology and outcomes.
In some RRs, authors submitted preliminary studies as initial evidence supporting the approach of the proposed final study, which was peer reviewed before the findings were known.

Supplementary Table 10: Article keywords included in the survey sample.
Psychological science’s “credibility revolution” has produced an explosion of metascientific work on improving research practices. Although much attention has been paid to replicability (reducing false positives), improving credibility depends on addressing a wide range of problems afflicting psychological science, beyond simply making psychology research more replicable. Here we focus on the “four validities” and highlight recent developments—many of which have been led by early-career researchers—aimed at improving these four validities in psychology research. We propose that the credibility revolution in psychology, which has its roots in replicability, can be harnessed to improve psychology’s validity more broadly.
Registered Reports (RRs) are a publishing model in which initial peer review happens before the research is completed. In-principle acceptance before outcomes are known combats publication bias and provides a clear distinction between confirmatory and exploratory research. The theoretical case for how RRs would improve the credibility of research findings is straightforward, but there is little empirical evidence, and RRs could carry unintended costs, such as reduced innovation or novelty. In total, 353 researchers each peer reviewed a pair of papers drawn from 29 published RRs and 57 non-RR comparison papers. RRs outperformed comparison papers on all 19 criteria (mean difference = 0.46), with effects ranging from little difference in novelty (0.13) and creativity (0.22) to substantial differences in rigor of methodology (0.99) and analysis (0.97) and in overall paper quality (0.66). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.
Every research project has limitations. The limitations that authors acknowledge in their articles offer a glimpse into some of the concerns that occupy a field’s attention. We examine the types of limitations authors discuss in their published articles by categorizing them according to the four validities framework and investigate whether the field’s attention to each of the four validities has shifted from 2010 to 2020. We selected one journal in social and personality psychology (Social Psychological and Personality Science; SPPS), the subfield most in the crosshairs of psychology’s replication crisis. We sampled 440 articles (with half of those articles containing a subsection explicitly addressing limitations), and we identified and categorized 831 limitations across the 440 articles. Articles with limitations sections reported more limitations than those without (avg. 2.6 vs. 1.2 limitations per article). Threats to external validity were the most common type of reported limitation (est. 52% of articles), and threats to statistical conclusion validity were the least common (est. 17% of articles). Authors reported slightly more limitations over time. Despite the extensive attention paid to statistical conclusion validity in the scientific discourse throughout psychology’s credibility revolution, our results suggest that concerns about statistics-related issues were not reflected in social and personality psychologists’ reported limitations. The high prevalence of limitations concerning external validity might suggest it is time that we improve our practices in this area, rather than apologizing for these limitations after the fact.