We greatly appreciate the care and thought that are evident in the 10 commentaries discussing our debate paper, the majority of which argued in favor of a formalized ICD-11 gaming disorder. We agree that there are some people whose video game play is related to life problems. We believe that understanding this population, and the nature and severity of the problems they experience, should be a focus of future research. However, moving from research construct to formal disorder requires a much stronger evidence base than we currently have. The bar for evidence and clinical utility should be set extremely high, because there is a genuine risk that diagnoses will be abused. We offer suggestions about the level of evidence that might be required: transparent and preregistered studies; a better demarcation of the subject area, including a rationale for focusing specifically on gaming rather than on a more general behavioral addictions concept; the exploration of non-addiction approaches; and the unbiased exploration of clinical approaches that first treat potentially underlying issues, such as depressive mood or social anxiety. We acknowledge that formalizing gaming disorder could have benefits, many of which were highlighted by colleagues in their commentaries, but we do not think these yet outweigh the wider societal and public health risks involved. Given the gravity of diagnostic classification and its wider societal impact, we urge our colleagues at the WHO to err on the side of caution for now and postpone the formalization.
Violence in digital games has long been a source of controversy in the scientific community and among the general public. Over two decades of research have examined this issue, yet much of that research has been undercut by methodological limitations and by ideological statements that go beyond what the scientific evidence can support. We review 25 years of experimental, cross-sectional, longitudinal, and meta-analytic research in this field. Empirical evidence regarding the impact of violent digital games on player aggression is, at best, mixed and cannot support unambiguous claims that such games are harmful or represent a public health crisis. Rather, indulging in such claims risks damaging the credibility of game effects research, credibility that can only be restored through better empirical work and more conservative, careful statements by scholars. We encourage the field to engage in responsible dialog and constructive debate that can continue to be enriching and invigorating.
Societies invest in scientific studies to better understand the world, and attempt to harness such improved understanding to address pressing societal problems. Published research, however, can only be useful for theory or application if it is credible. In science, a credible finding is one that has repeatedly survived risky falsification attempts. However, state-of-the-art meta-analytic approaches cannot determine the credibility of an effect because they do not account for the extent to which each included study has survived such attempted falsification. To overcome this problem, the following paper outlines a unified framework to estimate the credibility of published research by examining four fundamental falsifiability-related dimensions: (1) method/data transparency, (2) analytic reproducibility, (3) analytic robustness, and (4) effect replicability. A standardized workflow is proposed to quantify the degree to which a finding has survived scrutiny along these four credibility facets. The framework is demonstrated by applying it to published replications in the psychology literature. A web platform implementation of the framework is outlined, and we conclude by encouraging the community of researchers to contribute to the development and crowdsourcing of the platform.
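The abstract does not specify how scores on the four facets are combined into an overall estimate. As a purely illustrative sketch (the class name, the [0, 1] scoring scale, and the unweighted-mean aggregation are our assumptions, not the paper's method), a finding's credibility profile might be represented like this:

```python
from dataclasses import dataclass

@dataclass
class CredibilityProfile:
    """Hypothetical scores in [0, 1] for the four falsifiability-related facets."""
    transparency: float     # (1) method/data transparency
    reproducibility: float  # (2) analytic reproducibility
    robustness: float       # (3) analytic robustness
    replicability: float    # (4) effect replicability

    def summary(self) -> float:
        """Unweighted mean across facets -- a placeholder aggregation rule."""
        facets = (self.transparency, self.reproducibility,
                  self.robustness, self.replicability)
        return sum(facets) / len(facets)

# Example: a transparent, reproducible finding that replicated only weakly.
finding = CredibilityProfile(transparency=0.9, reproducibility=1.0,
                             robustness=0.6, replicability=0.4)
print(f"Overall credibility estimate: {finding.summary():.2f}")  # ~0.72
```

In practice such facet scores would come from the standardized workflow the paper proposes, not from hand-assigned numbers as shown here.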
The competitive reaction time task (CRTT) is the measure of aggressive behavior most commonly used in laboratory research. However, the task has been criticized for a lack of standardization: there are many different test procedures and at least 13 published variants for calculating an aggression score. We compared the different published analyses of the CRTT using data from 3 different studies to scrutinize whether they would yield the same results. The comparisons revealed large differences in significance levels and effect sizes across analysis procedures, suggesting that the unstandardized use and analysis of the CRTT have a substantial impact on the results obtained, as well as on their interpretation. Based on the outcome of our comparisons, we offer suggestions on how to address some of the issues associated with the CRTT, along with a guideline for researchers studying aggressive behavior in the laboratory.
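To make the standardization problem concrete, here is a minimal sketch of how different scoring variants can be computed from the same (simulated) CRTT data. The three variants shown are illustrative of the kinds reported in the literature, and the data are randomly generated, not taken from any of the studies above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical CRTT data: noise intensity (1-10) and duration (ms)
# chosen by each participant on each of 25 trials.
n_participants, n_trials = 40, 25
intensity = rng.integers(1, 11, size=(n_participants, n_trials))
duration = rng.integers(100, 5001, size=(n_participants, n_trials))

# Three of the many published scoring variants (illustrative, not exhaustive):
score_mean_intensity = intensity.mean(axis=1)          # mean intensity, all trials
score_first_trial = intensity[:, 0]                    # first trial only (unprovoked)
score_composite = (intensity * duration).mean(axis=1)  # intensity x duration composite

for name, score in [("mean intensity", score_mean_intensity),
                    ("first trial", score_first_trial),
                    ("intensity x duration", score_composite)]:
    print(f"{name:22s} M = {score.mean():10.1f}, SD = {score.std(ddof=1):10.1f}")
```

Because each variant weights trials and response dimensions differently, analyses based on them can diverge in both effect size and significance level, which is the pattern of divergence the paper reports.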