Central to understanding human behavior is a comprehensive mapping of brain-behavior relations within the context of lifespan development. Reproducible discoveries depend upon well-powered samples of reliable data. We provide to the scientific community two 10-minute multi-echo functional MRI (ME-fMRI) runs and structural MRI (T1-MPRAGE) from 181 healthy younger adults (ages 18–34 y) and 120 older adults (ages 60–89 y). T2-FLAIR MRIs and behavioral assessments are available for a majority subset of over 250 participants. Behavioral assessments include fluid and crystallized cognition, self-reported measures of personality, and socioemotional functioning. Initial quality control and validation of these data are provided. This dataset will be of value to scientists interested in BOLD signal isolated with ME-fMRI, individual differences in brain-behavior associations, and cross-sectional aging effects in healthy adults. Demographic and behavioral data are available within the Open Science Framework project “Goal-Directed Cognition in Older and Younger Adults” (http://osf.io/yhzxe/), which will be augmented over time; neuroimaging data are available on OpenNeuro (https://openneuro.org/datasets/ds003592).
The over-reliance on the null hypothesis significance testing framework and its accompanying tools has recently been challenged. An example of such a tool is statistical power analysis, which is used to determine how many participants are required to detect a minimally meaningful effect size in the population at a given level of power and Type I error rate. To investigate how power analysis is currently used, this study reviews the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. We found that many of the pieces of information required for a power analysis are not reported, and that the effect sizes entered into the procedure are often chosen on the basis of inappropriate rationales. In light of these findings, we argue that the power analysis procedure forces researchers to compromise when selecting the different pieces of information it requires. We suggest that researchers look beyond traditional power analysis when planning sample sizes, for example to precision-based power analysis or simply to collecting the largest sample size possible.
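The study above audits how power analyses are reported rather than endorsing any single procedure. Purely as a point of reference, the sketch below shows what a conventional a priori power analysis looks like in code: a minimal Python illustration using statsmodels, not the reviewed authors' procedure. The effect size, alpha, and power values are placeholder assumptions.

```python
# Minimal a priori power analysis sketch (illustrative only):
# solve for the per-group sample size of an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed minimally meaningful Cohen's d (placeholder)
    alpha=0.05,               # Type I error rate
    power=0.80,               # desired statistical power
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # ~63.8
```

The same call can be rearranged as a sensitivity check on a planned design, for example by supplying nobs1 and leaving power unset so that achieved power is solved for instead.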
Reporting and interpreting effect sizes (ESs) has been recommended by all major bodies within the field of psychology. In this systematic review, we investigated the reporting of effect sizes in six social-personality psychology journals from 2018, given that this area has been at the center of psychology's replication crisis. Our results show that although ES reporting is near perfect (even for follow-up tests), interpretation of ES magnitude, inclusion of confidence intervals for ESs, and interpretation of the precision of those confidence intervals all need development. We also highlight widespread confusion regarding how to interpret the magnitude of ESs within the context of the research.
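None of the code below comes from the reviewed articles; it is a minimal Python sketch of the kind of reporting the review calls for: an effect size accompanied by a confidence interval that can then be interpreted in context. The data are simulated, and the interval uses the common large-sample approximation to the sampling variance of Cohen's d, an assumption of this illustration.

```python
# Illustrative only: Cohen's d for two independent groups plus an
# approximate 95% confidence interval, computed from simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1 = rng.normal(loc=0.5, scale=1.0, size=80)   # simulated group 1
g2 = rng.normal(loc=0.0, scale=1.0, size=80)   # simulated group 2

n1, n2 = len(g1), len(g2)
pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
d = (g1.mean() - g2.mean()) / pooled_sd

# Large-sample approximation to the sampling variance of d
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
ci = d + np.array([-1, 1]) * stats.norm.ppf(0.975) * np.sqrt(var_d)

print(f"Cohen's d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```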
Psychological research is rife with studies that inappropriately conclude there is no association, or no effect, between a predictor and the outcome in regression models following statistically nonsignificant results. This approach is methodologically flawed, however, because failing to reject the null hypothesis using a traditional, difference-based test does not mean the null is true (i.e., that there is no relationship). This flawed methodology leads to high rates of incorrect conclusions that flood the literature. This thesis introduces a novel, methodologically sound alternative. I demonstrate how equivalence testing can be applied to evaluate whether a predictor has a negligible effect on the outcome variable in multiple regression. I constructed a simulation study to evaluate the performance (i.e., power and error rates) of two equivalence-based tests and compared them to the common but inappropriate method of concluding no effect by failing to reject the null hypothesis of the traditional test. I also provide two R functions to accompany this thesis, supplying researchers with open-access, easy-to-use tools that they can flexibly adopt in their own research. The use of the proposed equivalence-based methods and R functions is then illustrated with examples from the literature, and recommendations for reporting and interpreting results are discussed. My results demonstrate that using tests of equivalence instead of the traditional test is the appropriate statistical choice: tests of equivalence show high rates of correct conclusions, especially with larger sample sizes, and low rates of incorrect conclusions, whereas the traditional method shows unacceptably high rates of incorrect conclusions.
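The two R functions proposed in the thesis are not reproduced here. As a hedged illustration of the underlying idea, the Python sketch below applies two one-sided tests (TOST) to a single regression slope to ask whether it falls within a negligible-effect band. The equivalence bound (±0.1) and the simulated data are assumptions made for the example only.

```python
# Sketch of equivalence testing (TOST) on one multiple-regression coefficient.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)                          # predictor of interest
x2 = rng.normal(size=n)                          # covariate
y = 0.02 * x1 + 0.5 * x2 + rng.normal(size=n)    # x1's true effect lies inside the band

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()
b, se, df = fit.params[1], fit.bse[1], fit.df_resid

delta = 0.1                       # smallest effect size of interest (assumed bound)
p_lower = stats.t.sf((b + delta) / se, df)   # H0: beta <= -delta
p_upper = stats.t.cdf((b - delta) / se, df)  # H0: beta >= +delta
p_tost = max(p_lower, p_upper)    # rejecting both one-sided nulls => negligible effect

print(f"b = {b:.3f}, TOST p = {p_tost:.3f} (reject => effect within ±{delta})")
```

The decision rule inverts the usual logic: a small TOST p-value supports the claim that the coefficient is negligibly different from zero, rather than that it differs from zero.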