There is widespread public and academic interest in understanding the uses and effects of digital media. Scholars primarily use self-report measures of the quantity or duration of media use as proxies for more objective measures, but the validity of these self-reports remains unclear. Advancements in data collection techniques have produced a collection of studies indexing both self-reported and log-based measures. To assess the alignment between these measures, we conducted a meta-analysis of this research. Based on 106 effect sizes, we found that self-reported media use only moderately correlates with logged measurements, that self-reports were rarely an accurate reflection of logged media use, and that measures of problematic media use show an even smaller association with usage logs. These findings raise concerns about the validity of findings relying solely on self-reported measures of media use. The materials needed to reproduce the analysis and an article preprint are available at: https://osf.io/dhx48/.
Understanding how people use technology remains important, particularly when measuring the impact this might have on individuals and society. However, despite a growing body of resources that can quantify smartphone use, research within psychology and social science overwhelmingly relies on self-reported assessments. These have yet to convincingly demonstrate an ability to predict objective behavior. Here, and for the first time, we compare a variety of smartphone use and 'addiction' scales with objective behaviors derived from Apple's Screen Time application. While correlations between psychometric scales and objective behavior are generally poor, single estimates and measures that frame technology use as habitual rather than 'addictive' correlate more favorably with subsequent behavior. We conclude that existing self-report instruments are unlikely to be sensitive enough to accurately predict basic technology-related behaviors. As a result, conclusions regarding the psychological impact of technology are unreliable when they rely solely on these measures to quantify typical usage.
Over the last 10 years, many canonical findings in the social sciences have proven unreliable. This so-called "replication crisis" has spurred calls for open science practices, which aim to increase the reproducibility, replicability, and generalizability of findings. Communication research is subject to many of the same challenges that have caused low replicability in other fields. We therefore propose an agenda for adopting open science practices in Communication, comprising seven suggestions: (1) publish materials, data, and code; (2) preregister studies and submit registered reports; (3) conduct replications; (4) collaborate; (5) foster open science skills; (6) implement Transparency and Openness Promotion Guidelines; and (7) incentivize open science practices. Although our agenda focuses mostly on quantitative research, we also reflect on open science practices relevant to qualitative research. We conclude by discussing potential objections and concerns associated with open science practices.