Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.
We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.

The lack of reproducibility of scientific studies has caused growing concern over the credibility of claims of new discoveries based on 'statistically significant' findings. There has been much progress toward documenting and addressing several causes of this lack of reproducibility (for example, multiple testing, P-hacking, publication bias and under-powered studies). However, we believe that a leading cause of non-reproducibility has not yet been adequately addressed: statistical standards of evidence for claiming new discoveries in many fields of science are simply too low. Associating statistically significant findings with P < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems.

For fields where the threshold for defining statistical significance for new discoveries is P < 0.05, we propose a change to P < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields. Results that would currently be called significant but do not meet the new threshold should instead be called suggestive. While statisticians have known the relative weakness of using P ≈ 0.05 as a threshold for discovery, and the proposal to lower it to 0.005 is not new [1,2], a critical mass of researchers now endorse this change.

We restrict our recommendation to claims of discovery of new effects. We do not address the appropriate threshold for confirmatory or contradictory replications of existing claims. We also do not advocate changes to discovery thresholds in fields that have already adopted more stringent standards (for example, genomics and high-energy physics research; see the 'Potential objections' section below).

We also restrict our recommendation to studies that conduct null hypothesis significance tests. We have diverse views about how best to improve reproducibility, and many of us believe that other ways of summarizing the data, such as Bayes factors or other posterior summaries based on clearly articulated model assumptions, are preferable to P values. However, changing the P-value threshold is simple, aligns with the training undertaken by many researchers, and might quickly achieve broad acceptance.
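To make the false-positive concern concrete, here is a minimal sketch (in Python) of the expected false discovery rate at the two thresholds. The 1:10 prior odds of a true effect and the 80% power figure are illustrative assumptions, not values from the proposal, and power is held fixed across thresholds for simplicity.

```python
# Illustrative sketch: expected share of false positives among
# "significant" results at two P-value thresholds.
# Assumed (not from the proposal itself): 1:10 prior odds that a
# tested effect is real, and 80% power to detect real effects.

def false_discovery_rate(alpha, power, prior_odds_true):
    """P(no real effect | P < alpha), by Bayes' rule.

    prior_odds_true -- odds that the tested hypothesis is true (H1:H0).
    """
    p_h1 = prior_odds_true / (1 + prior_odds_true)
    p_h0 = 1 - p_h1
    false_pos = alpha * p_h0   # null effects that cross the threshold
    true_pos = power * p_h1    # real effects that cross the threshold
    return false_pos / (false_pos + true_pos)

for alpha in (0.05, 0.005):
    fdr = false_discovery_rate(alpha, power=0.8, prior_odds_true=0.1)
    print(f"alpha = {alpha}: ~{fdr:.0%} of 'discoveries' are false")
# alpha = 0.05:  ~38% of 'discoveries' are false
# alpha = 0.005: ~6% of 'discoveries' are false
```

Under these assumptions, tightening the threshold cuts the expected false discovery rate roughly sevenfold, which is the arithmetic behind the claim that P < 0.05 alone yields a high rate of false positives.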
Response inhibition is essential for navigating everyday life. Its derailment is considered integral to numerous neurological and psychiatric disorders, and more generally, to a wide range of behavioral and health problems. Response-inhibition efficiency furthermore correlates with treatment outcome in some of these conditions. The stop-signal task is an essential tool to determine how quickly response inhibition is implemented. Despite its apparent simplicity, there are many features (ranging from task design to data analysis) that vary across studies in ways that can easily compromise the validity of the obtained results. Our goal is to facilitate a more accurate use of the stop-signal task. To this end, we provide 12 easy-to-implement consensus recommendations and point out the problems that can arise when they are not followed. Furthermore, we provide user-friendly open-source resources intended to inform statistical-power considerations, facilitate the correct implementation of the task, and assist in proper data analysis.
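As an illustration of the kind of analysis these recommendations concern, below is a minimal sketch of the integration method (with replacement of go omissions) for estimating stop-signal reaction time (SSRT). The function name, array layout and simulated inputs are hypothetical; consult the consensus recommendations and their accompanying open-source resources for a vetted implementation.

```python
import numpy as np

def ssrt_integration(go_rt, ssd, stop_responded):
    """Estimate SSRT with the integration method.

    go_rt          -- go-trial RTs in ms (NaN where the response was omitted)
    ssd            -- stop-signal delays in ms, one per stop trial
    stop_responded -- bool per stop trial: True if the subject responded
    """
    # Replace go omissions with the slowest observed go RT, so missed
    # go responses count as (very) slow rather than being discarded.
    go_rt = np.where(np.isnan(go_rt), np.nanmax(go_rt), go_rt)
    p_respond = np.mean(stop_responded)      # P(respond | stop signal)
    # The nth fastest go RT, where n = p_respond * number of go trials,
    # estimates when the stop process finishes relative to trial onset.
    n = max(int(np.ceil(p_respond * go_rt.size)), 1)
    nth_rt = np.sort(go_rt)[n - 1]
    return nth_rt - np.mean(ssd)             # SSRT = nth go RT - mean SSD

# Hypothetical data: 200 go trials, 50 stop trials, ~50% stop failures.
rng = np.random.default_rng(1)
go = rng.normal(500, 80, 200)
ssd = np.full(50, 250.0)
resp = rng.random(50) < 0.5
print(f"SSRT ~ {ssrt_integration(go, ssd, resp):.0f} ms")
```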
Background
Exaggerated or simplistic news is often blamed for adversely influencing public health. However, recent findings suggested that many exaggerations were already present in university press releases, which scientists approve. Surprisingly, these exaggerations were not associated with more news coverage. Here we test whether these two controversial results also arise in press releases from prominent science and medical journals. We then investigate the influence of mitigating caveats in press releases, to test assumptions that caveats harm news interest or are ignored.

Methods and Findings
Using quantitative content analysis, we analyzed press releases (N = 534) on biomedical and health-related science issued by leading peer-reviewed journals. We similarly analyzed the associated peer-reviewed papers (N = 534) and news stories (N = 582). Main outcome measures were advice to readers and causal statements drawn from correlational research. Exaggerations in press releases predicted exaggerations in news (odds ratios 2.4 and 10.9, 95% CIs 1.3 to 4.5 and 3.9 to 30.1) but were not associated with increased news coverage, consistent with previous findings. Combining datasets from universities and journals (996 press releases, 1,250 news stories), we found that when caveats appeared in press releases there was no reduction in journalistic uptake, but there was a clear increase in caveats in news (odds ratios 9.6 and 9.5 for caveats for advice and causal claims, CIs 4.1 to 24.3 and 6.0 to 15.2). The main study limitation is its retrospective correlational nature.

Conclusions
For health and science news directly inspired by press releases, the main source of both exaggerations and caveats appears to be the press release itself. However, we find no evidence that exaggerations increase, or caveats decrease, the likelihood of news coverage. These findings should be encouraging for press officers and scientists who wish to minimize exaggeration and include caveats in their press releases.
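For readers unfamiliar with the effect sizes reported above, the sketch below shows the standard unadjusted odds-ratio calculation with a 95% Wald confidence interval from a 2×2 table. The counts are hypothetical, not the study's data, and the published estimates may come from models that adjust for additional variables.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = release exaggerated, news exaggerated
    b = release exaggerated, news not exaggerated
    c = release not exaggerated, news exaggerated
    d = release not exaggerated, news not exaggerated
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(a=40, b=20, c=15, d=60)
print(f"OR = {or_:.1f}, 95% CI {lo:.1f} to {hi:.1f}")
# OR = 8.0, 95% CI 3.7 to 17.4
```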
In the present study, we examined the effect of wins and losses on impulsive action in gambling (Experiments 1–3) and nongambling tasks (Experiments 4–5). In each experiment, subjects performed a simple task in which they had to win points. On each trial, they had to choose between a gamble and a nongamble. The gamble was always associated with a higher amount but a lower probability of winning than the nongamble. After subjects indicated their choice (i.e., gamble or not), feedback was presented. They had to press a key to start the next trial. Experiments 1–3 showed that, compared to the nongambling baseline, subjects were faster to initiate the next trial after a gambled loss, indicating that losses can induce impulsive actions. In Experiments 4 and 5, subjects alternated between the gambling task and a neutral decision-making task in which they could not win or lose points. Subjects were faster in the neutral decision-making task if they had just lost in the gambling task, suggesting that losses have a general effect on action. Our results challenge the dominant idea that humans become more cautious after suboptimal outcomes. Instead, they indicate that losses in the context of potential rewards are emotional events that increase impulsivity.
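A minimal sketch of the study's central comparison (time to initiate the next trial, conditioned on the previous trial's outcome) is below; the column names and values are hypothetical, not the study's actual data format.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, recording the outcome
# of the previous trial and the time taken to initiate the current one.
trials = pd.DataFrame({
    "prev_outcome": ["loss", "win", "nongamble", "loss", "nongamble", "win"],
    "init_rt_ms":   [410, 470, 505, 425, 515, 480],
})

# Mean initiation time by previous outcome. Faster initiation after a
# gambled loss than after a nongamble indicates post-loss speeding, the
# opposite of the cautious slowing predicted by post-error accounts.
print(trials.groupby("prev_outcome")["init_rt_ms"].mean().round(1))
```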