Publication bias and questionable research practices in primary research can lead to substantially overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, much of this work has not been tailored specifically to psychology, so it is not clear which methods work best for data typically seen in our field. Here, we present a comprehensive simulation study to examine how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We created such scenarios by simulating several levels of questionable research practices, publication bias, and heterogeneity, and by using study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses: that is, report a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue working to improve the primary literature and to conduct large-scale, pre-registered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at
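To make the idea of a meta-analytic sensitivity analysis concrete, here is a minimal sketch that pools the same set of study effects under two standard models, a fixed-effect model and a DerSimonian-Laird random-effects model, so that their agreement (or divergence) can be inspected. This is an illustrative toy, not the methods compared in the simulation study above; the function name and return format are my own, and real analyses should use a vetted package such as metafor in R.

```python
# Sketch: compare a fixed-effect pooled estimate with a DerSimonian-Laird
# random-effects estimate as a simple sensitivity check. Illustrative only.

def meta_analyze(effects, variances):
    """Pool study effect sizes under fixed-effect and DL random-effects models.

    effects   -- list of per-study effect size estimates
    variances -- list of per-study sampling variances (same length)
    Returns a dict with the two pooled estimates and the DL tau^2.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                      # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q and the DerSimonian-Laird between-study variance estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    random_eff = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return {"fixed": fixed, "random": random_eff, "tau2": tau2}
```

When the studies are homogeneous the two estimates coincide and tau^2 is zero; under heterogeneity the random-effects weights shrink toward equality, which is exactly the kind of model-dependent behavior a sensitivity analysis is meant to surface.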
Background
Meta-analyses play an important role in cumulative science by combining information across multiple studies and attempting to provide effect size estimates corrected for publication bias. Research on the reproducibility of meta-analyses reveals that errors are common, and the percentage of effect size calculations that cannot be reproduced is much higher than is desirable. Furthermore, the flexibility in inclusion criteria when performing a meta-analysis, combined with the many conflicting conclusions drawn by meta-analyses of the same set of studies performed by different researchers, has led some people to doubt whether meta-analyses can provide objective conclusions.

Discussion
The present article highlights the need to improve the reproducibility of meta-analyses to facilitate the identification of errors, allow researchers to examine the impact of subjective choices such as inclusion criteria, and update the meta-analysis after several years. Reproducibility can be improved by applying standardized reporting guidelines and sharing all meta-analytic data underlying the meta-analysis, including quotes from articles to specify how effect sizes were calculated. Pre-registration of the research protocol (which can be peer-reviewed using novel 'registered report' formats) can be used to distinguish a-priori analysis plans from data-driven choices, and reduce the amount of criticism after the results are known.

Summary
The recommendations put forward in this article aim to improve the reproducibility of meta-analyses. In addition, they have the benefit of "future-proofing" meta-analyses by allowing the shared data to be re-analyzed as new theoretical viewpoints emerge or as novel statistical techniques are developed. Adoption of these practices will lead to increased credibility of meta-analytic conclusions, and facilitate cumulative scientific knowledge.
Individual discounting rates for different types of delayed reward are typically assumed to reflect a single, underlying trait of impulsivity. Recently, we showed that discounting rates are orders of magnitude steeper for directly consumable liquid rewards than for monetary rewards (Jimura et al. 2009), raising the question of whether discounting rates for different types of reward covary at the individual level. Accordingly, the present study examined the relation between discounting of hypothetical money and real liquid rewards in young adults (Experiment 1) and older adults (Experiment 2). At the group level, young adults discounted monetary rewards more steeply than the older adults, but the reverse pattern was observed with liquid rewards. At the individual level, the rates at which young and older participants discounted each reward type were stable over a two- to fifteen-week interval (rs >.70), but there was no significant correlation between the rates at which they discounted the two reward types. These results suggest that although similar decision-making processes may underlie the discounting of different types of rewards, the rates at which individuals discount money and directly consumable rewards may reflect separate, stable traits, rather than a single trait of impulsivity.
In previous studies, researchers have found that humans discount delayed rewards orders of magnitude less steeply than do other animals. Humans also discount smaller delayed reward amounts more steeply than larger amounts, whereas animals apparently do not. These differences between humans and animals might reflect differences in the types of rewards studied and/or the fact that animals actually had to wait for their rewards. In the present article, we report the results of three experiments in which people made choices involving liquid rewards delivered and consumed after actual delays, thereby bridging the gap between animal and human studies. Under these circumstances, humans, like animals, discounted the value of rewards delayed by seconds; however, unlike animals, they still showed an effect of reward amount. Human discounting was well described by the same hyperboloid function that has previously been shown to describe animal discounting of delayed food and water rewards, as well as human discounting of real and hypothetical monetary rewards.
A new measure of individual habits and preferences in video game use is developed in order to better study the risk factors of pathological game use (i.e., excessively frequent or prolonged use, sometimes called “game addiction”). This measure was distributed to internet message boards for game enthusiasts and to college undergraduates. An exploratory factor analysis identified 9 factors: Story, Violent Catharsis, Violent Reward, Social Interaction, Escapism, Loss-Sensitivity, Customization, Grinding, and Autonomy. These factors demonstrated excellent fit in a subsequent confirmatory factor analysis, and, importantly, were found to reliably discriminate between inter-individual game preferences (e.g., Super Mario Brothers as compared to Call of Duty). Moreover, three factors were significantly related to pathological game use: the use of games to escape daily life, the use of games as a social outlet, and positive attitudes toward the steady accumulation of in-game rewards. The current research identifies individual preferences and motives relevant to understanding video game players' evaluations of different games and risk factors for pathological video game use.