In response to recommendations to redefine statistical significance to p ≤ .005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.
Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
Confidence in the replicability and reproducibility of published psychological findings has been low. Previous work has demonstrated that a population of psychologists exists that has used questionable research practices (QRPs): behaviors during data collection, analysis, and publication that can increase the number of false-positive findings in the scientific literature. The present work sought to estimate the current size of the QRP-using population of American psychologists and to identify whether this sub-population of scientists is stigmatized. Using a direct estimator, we estimate that 18.8% of American psychologists have used at least one QRP in the past 12 months. This estimate rises to 24.40% when using the generalized network scale-up estimator, an estimation method that utilizes the academic social networks of participants. Furthermore, attitudes of psychologists toward QRP users, and observed behavioral data collected from self-reported QRP users, suggest that QRP users are a stigmatized sub-population of psychologists. Together, these findings provide better insight into how many psychologists use questionable practices and how they exist in the social environment.
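To make the scale-up logic concrete: a basic network scale-up estimate infers a hidden population's size from how many of its members respondents report knowing, relative to respondents' overall network sizes. The sketch below is a minimal illustration with made-up numbers; the generalized estimator used in the study adds adjustments (e.g., for the visibility of QRP use within networks) that are omitted here, and all variable names and figures are hypothetical.

```python
def scale_up_estimate(known_users, network_sizes, total_population):
    """Basic network scale-up estimate of a hidden population's size.

    known_users:      per-respondent counts of hidden-population members known
    network_sizes:    per-respondent estimates of total personal network size
    total_population: size of the population the networks are drawn from
    """
    # Fraction of all reported ties that point into the hidden population,
    # scaled up to the full population.
    return total_population * sum(known_users) / sum(network_sizes)

# Toy inputs, purely illustrative (not from the study):
estimate = scale_up_estimate(
    known_users=[2, 0, 1, 3],       # QRP users each respondent reports knowing
    network_sizes=[40, 55, 30, 75],  # each respondent's estimated network size
    total_population=100_000,        # e.g., a population of psychologists
)
print(f"Estimated hidden population size: {estimate:.0f}")
```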
Replication is an important “credibility control” mechanism for clarifying the reliability of published findings. However, replication is costly, and it is infeasible to replicate everything. Accurate, fast, lower-cost alternatives such as eliciting predictions from experts or novices could accelerate credibility assessment and improve the allocation of replication resources for important and uncertain findings. We elicited judgments from experts and novices on 100 claims from preprints about an emerging area of research (the COVID-19 pandemic) using a new interactive structured elicitation protocol, and we conducted 35 new replications. Participants’ average estimates were similar to the observed replication rate of 60%. After interacting with their peers, novices updated both their estimates and their confidence in their judgments significantly more than experts did, and their accuracy improved more between elicitation rounds. Experts’ average accuracy was 0.54 (95% CI [0.454, 0.628]) after interaction, and they correctly classified 55% of claims; novices’ average accuracy was 0.55 (95% CI [0.455, 0.628]), and they correctly classified 61% of claims. The difference in accuracy between experts and novices was not significant, and their judgments on the full set of claims were strongly correlated (r = .48). These results are consistent with prior investigations eliciting predictions about the replicability of published findings in established areas of research and suggest that expertise may not be required for credibility assessment of some research findings.
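One simple way to score such forecasts is to classify a claim as “replicates” when the elicited probability exceeds 0.5 and compare against the replication outcomes, alongside a Brier score for the probabilistic estimates. This is a minimal sketch under those assumptions, not the study's actual scoring protocol; the probabilities and outcomes below are invented for illustration.

```python
# Hypothetical elicited probabilities that each claim will replicate,
# and hypothetical observed outcomes (True = claim replicated).
probs = [0.8, 0.35, 0.6, 0.2, 0.55]
outcomes = [True, False, True, True, False]

# Classification accuracy: call "replicates" when p > 0.5.
correct = sum((p > 0.5) == o for p, o in zip(probs, outcomes))
classification_rate = correct / len(probs)

# Brier score: mean squared error of the probabilistic forecasts
# (lower is better; always guessing 0.5 scores 0.25).
brier = sum((p - float(o)) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

print(f"Correctly classified: {classification_rate:.0%}")
print(f"Brier score: {brier:.3f}")
```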
Academic publishing has changed substantially in the past 30 years due to the advent of the internet. Unlike print publications, digital publications can provide visitors with additional information about a publication and its previous readers via metadata such as download counts. This study investigated the effect of this metadata on the development of download inequality and the unpredictability of success in an experimental academic literature marketplace. We found that the presence of an accurate download count increased inequality in article downloads, meaning fewer papers accumulated a larger share of the total downloads. We also found that the presence of a download count increased the unpredictability of success, meaning that across identical instances, different papers became the most popular. Finally, an exploratory analysis found that papers were rated more highly when download counts were present. Together, these findings provide insight into how the download behaviors of previous academic readers may influence literature choice.
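Download inequality of this kind is commonly summarized with a Gini coefficient (0 = downloads spread evenly across papers, approaching 1 = one paper takes nearly everything). The abstract does not name the study's exact measure, so the following is only a sketch under that assumption.

```python
def gini(downloads):
    """Gini coefficient of a download distribution (0 = equal, ~1 = winner-take-all)."""
    xs = sorted(downloads)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Standard formula based on rank-weighted sums of the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # 0.0  -> downloads spread evenly
print(gini([0, 0, 0, 40]))     # 0.75 -> one paper dominates
```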