The scientific community has grown increasingly concerned about the high rate of false positives and unreliable results in the psychological literature, but the harmful impact of false negatives has been largely ignored. False negatives are especially troubling in research areas where demonstrating the absence of an effect is crucial, such as studies of unconscious or implicit processing. Research on implicit processes seeks evidence of above-chance performance on an implicit behavioral measure alongside chance-level performance (that is, a null result) on an explicit measure of awareness. A systematic review of 73 studies of contextual cuing, a popular implicit learning paradigm, comprising 181 statistical analyses of awareness tests, reveals how underpowered studies can lead to failure to reject a false null hypothesis. Among the studies that reported sufficient information, the meta-analytic effect size across awareness tests was dz = 0.31 (95% CI 0.24–0.37), indicating that participants’ learning in these experiments was conscious. The unusually large number of positive results in this literature cannot be explained by selective publication. Instead, our analyses demonstrate that these awareness tests are typically insensitive and underpowered to detect small-to-medium, but real, effects. These findings challenge a widespread and theoretically important claim about the extent of unconscious human cognition.
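The power problem described above can be made concrete with a quick sample-size calculation. The sketch below uses only the effect size reported in the abstract (dz = 0.31) and the standard normal-approximation formula n ≈ ((z₁₋α/₂ + z₁₋β) / dz)²; the function name and conventional settings (α = .05, 80% power) are illustrative assumptions, not figures from the paper itself.

```python
from math import ceil
from statistics import NormalDist

def paired_t_sample_size(dz, alpha=0.05, power=0.80):
    """Approximate n for a two-sided paired/one-sample t-test, via the
    normal approximation n ~ ((z_{1-alpha/2} + z_{1-power}) / dz)^2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = nd.inv_cdf(power)           # quantile corresponding to desired power
    return ceil(((z_alpha + z_beta) / dz) ** 2)

# Detecting the meta-analytic awareness effect (dz = 0.31) with 80% power
# at alpha = .05 requires on the order of 82 participants.
print(paired_t_sample_size(0.31))  # → 82
```

Awareness tests with the small samples typical of this literature therefore have little chance of rejecting the null even when, as the meta-analysis suggests, the effect is real.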
In response to recommendations to redefine statistical significance to p ≤ .005, we propose that researchers should transparently report and justify all choices they make when designing a study, including the alpha level.
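One cost of any alpha choice is the sample size it demands, which is one reason the choice needs justification. The sketch below (an illustrative normal-approximation calculation; the function, the medium effect size dz = 0.5, and the 80% power target are assumptions, not taken from the paper) shows how the required n grows when alpha is lowered from .05 to .005:

```python
from math import ceil
from statistics import NormalDist

def required_n(dz, alpha, power=0.80):
    """Normal-approximation sample size for a two-sided paired t-test."""
    nd = NormalDist()
    return ceil(((nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)) / dz) ** 2)

# For a medium effect (dz = 0.5), moving from alpha = .05 to alpha = .005
# raises the required sample size by roughly 70%.
print(required_n(0.5, alpha=0.05))   # → 32
print(required_n(0.5, alpha=0.005))  # → 54
```

A fixed alpha of .005 thus trades lower false-positive risk for substantially larger samples (or lower power), which is exactly the kind of trade-off the authors argue should be reported and justified rather than set by convention.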
Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.
Enthusiasm for research on the brain and its application in education is growing among teachers. However, a lack of sufficient knowledge, poor communication between educators and scientists, and the effective marketing of dubious educational products have led to the proliferation of numerous ‘neuromyths.’ As a first step toward designing effective interventions to correct these misconceptions, previous studies have explored the prevalence of neuromyths in different countries. In the present study we extend this applied research by gathering data from a new sample of Spanish teachers and by meta-analyzing all the evidence available so far. Our results show that some of the most popular neuromyths identified in previous studies are also endorsed by Spanish teachers. The meta-analytic synthesis of these data and previous research confirms that the popularity of some neuromyths is remarkably consistent across countries, although we also note peculiarities and exceptions with important implications for the development of effective interventions. In light of the increasing popularity of pseudoscientific practices in schools worldwide, we suggest a set of interventions to address misconceptions about the brain and education.
Impaired procedural learning has been suggested as a possible cause of developmental dyslexia (DD) and specific language impairment (SLI). This study examined the relationship between measures of verbal and non-verbal implicit and explicit learning and measures of language, literacy, and arithmetic attainment in a large sample of 7- to 8-year-old children. Measures of verbal explicit learning were correlated with measures of attainment. In contrast, no relationships between measures of implicit learning and attainment were found. Critically, the reliability of the implicit learning tasks was poor. Our results show that measures of procedural learning, as currently used, are typically unreliable and insensitive to individual differences. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=YnvV-BvNWSo