Peer review of RRs focuses on the importance of the research question and the quality of the methodology, not on whether the findings are positive, novel, and clean. More than 250 journals have adopted RRs since 2013 on the theorized promise of improving rigor and credibility. Initial evidence suggests that RRs are (1) effective at mitigating publication bias, with a sharp increase in published negative results compared with the standard model 26,27, and (2) cited as often as or more often than other articles in the same journals 28. However, there is no evidence about whether scholars perceive RRs to have higher, lower, or similar research quality compared with papers published in the standard model. The RR format could also have costs, such as authors pursuing less interesting questions or conducting less novel or creative research 29,30.

We conducted an observational investigation of perceptions of the quality and importance of RRs compared with the standard model across a variety of outcome criteria. We recruited 353 researchers to each peer review a pair of papers: one of 29 RRs from psychology and neuroscience and one of 57 matched non-RR comparison papers. Comparison papers addressed similar topics; about half were by the same first or corresponding authors, and about half were published in the same journal. RRs are a popular format for replication studies 3,31, but replications are rare in the standard model, so we excluded replication RRs. Researchers were assigned to papers according to their self-reported expertise based on the papers' keywords (Supplementary Table 10 lists the article keywords included in the survey sample). Researchers reported that, on average, they were qualified to review the papers (N=353; RR M=3.74, SD=1.02; comparison paper M=3.59, SD=1.07; range 1 [not at all qualified] to 5 [substantially qualified]). Reviewers evaluated 19 outcome criteria, including the quality, rigor, novelty, creativity, and importance of the papers' methodology and outcomes. In some RRs, authors submitted preliminary studies as initial evidence supporting the approach of the proposed final study that was peer reviewed before the findings were known.
Psychological science’s “credibility revolution” has produced an explosion of metascientific work on improving research practices. Although much attention has been paid to replicability (reducing false positives), improving credibility depends on addressing a wide range of problems afflicting psychological science, beyond simply making psychology research more replicable. Here we focus on the “four validities” and highlight recent developments—many of which have been led by early-career researchers—aimed at improving these four validities in psychology research. We propose that the credibility revolution in psychology, which has its roots in replicability, can be harnessed to improve psychology’s validity more broadly.
Registered Reports (RRs) are a publishing model in which initial peer review happens before the research is completed. In-principle acceptance before outcomes are known combats publication bias and provides a clear distinction between confirmatory and exploratory research. The theoretical case for how RRs would improve the credibility of research findings is straightforward, but there is little empirical evidence, and RRs could carry unintended costs such as reduced innovation or novelty. In this study, 353 researchers peer reviewed a pair of papers drawn from 29 published RRs and 57 non-RR comparison papers. RRs outperformed comparison papers on all 19 criteria (mean difference = 0.46), with effects ranging from little difference in novelty (0.13) and creativity (0.22) to substantial differences in rigor of methodology (0.99) and analysis (0.97) and overall paper quality (0.66). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.
Public reactions to protests are often divided, with some viewing the protest as a legitimate response to injustice and others perceiving the protest as illegitimate. We examine how online news sources oriented to different audiences frame protest, potentially encouraging these divergent reactions. We focus on online news coverage following the 2014 police shooting of a Black teenager, Michael Brown. Preregistered analyses of headlines and images and their captions showed that sources oriented toward African Americans were more likely to include content conveying racial injustice and legitimacy of the subsequent protests than sources oriented toward a general audience. In contrast, general audience sources emphasized conflict between protesters and police, making fewer references to the protesters’ cause. Whereas much work on media segregation addresses the propensity of audiences to consume different sources, our work suggests that news sources may also contribute to information fragmentation by differentially framing the same events.
What research practices should be considered acceptable? Historically, scientists have set the standards for what constitutes acceptable research practices, but there is value in considering non-scientists' perspectives, including those of research participants. We surveyed 1,873 participants from MTurk and university subject pools after their participation in one of eight minimal-risk studies. We asked participants how they would feel if (mostly) common research practices were applied to their data: p-hacking/cherry-picking results, selective reporting of studies, Hypothesizing After Results are Known (HARKing), committing fraud, conducting direct replications, sharing data, sharing methods, and open access publishing. An overwhelming majority of psychology research participants thought questionable research practices (e.g., p-hacking, HARKing) were unacceptable (68.3–81.3%) and were supportive of practices to increase transparency and replicability (71.4–80.1%). A surprising number of participants expressed positive or neutral views toward scientific fraud (18.7%), raising concerns about data quality. We grapple with this concern and interpret our results in light of the limitations of our study. Despite the ambiguity in our results, we argue that there is evidence (from our study and others') that researchers may be violating participants' expectations and should be transparent with participants about how their data will be used.