There is widespread public and academic interest in understanding the uses and effects of digital media. Scholars primarily use self-report measures of the quantity or duration of media use as proxies for more objective measures, but the validity of these self-reports remains unclear. Advancements in data collection techniques have produced a collection of studies indexing both self-reported and log-based measures. To assess the alignment between these measures, we conducted a meta-analysis of this research. Based on 106 effect sizes, we found that self-reported media use was only moderately correlated with logged measurements, that self-reports were rarely an accurate reflection of logged media use, and that measures of problematic media use showed an even smaller association with usage logs. These findings raise concerns about the validity of findings relying solely on self-reported measures of media use. The materials needed to reproduce the analysis and an article preprint are available at: https://osf.io/dhx48/.
Over the last 10 years, many canonical findings in the social sciences have been shown to be unreliable. This so-called “replication crisis” has spurred calls for open science practices, which aim to increase the reproducibility, replicability, and generalizability of findings. Communication research is subject to many of the same challenges that have caused low replicability in other fields. As a result, we propose an agenda for adopting open science practices in Communication, which includes the following seven suggestions: (1) publish materials, data, and code; (2) preregister studies and submit registered reports; (3) conduct replications; (4) collaborate; (5) foster open science skills; (6) implement Transparency and Openness Promotion Guidelines; and (7) incentivize open science practices. Although our agenda focuses mostly on quantitative research, we also reflect on open science practices relevant to qualitative research. We conclude by discussing potential objections and concerns associated with open science practices.
The influence of digital media on personal and social well-being is a question of immense public and academic interest. Scholars in this domain often use retrospective self-report measures of the quantity or duration of media use as a proxy for more objective measures, but the validity of these self-report measures remains unclear. Recent advancements in log-based data collection techniques have produced a growing collection of studies indexing both self-reported media use and device-logged measurements. Herein, we report a meta-analysis of this body of research. Based on 104 effect sizes, we found that self-reported media use was only moderately correlated with device-logged measurements, and that these self-report measures were rarely an accurate reflection of logged media use. These results demonstrate that self-reported measures of the quantity or duration of media use are not a valid index of the amount of time people actually spend using media. These findings have serious implications for the study of media use and well-being, suggesting that caution is warranted in drawing conclusions regarding media effects from studies relying solely on self-reported measures of media use.
During the onset of the COVID-19 pandemic, the COVIDiSTRESS Consortium launched an open-access global survey to understand and improve individuals’ experiences related to the crisis. A year later, we extended this line of research by launching a new survey to address the dynamic landscape of the pandemic. This survey was released with the goal of addressing diversity, equity, and inclusion by working with over 150 researchers across the globe, who collected data in 48 languages and dialects across 137 countries. The resulting cleaned dataset described here includes 15,740 of over 20,000 responses. The dataset allows cross-cultural study of psychological well-being and behaviours a year into the pandemic. It includes measures of stress, resilience, vaccine attitudes, trust in government and scientists, compliance, and information acquisition and misperceptions regarding COVID-19. Open-access raw and cleaned datasets with computed scores are available. Just as our initial COVIDiSTRESS dataset has facilitated government policy decisions regarding health crises, this dataset can be used by researchers and policy makers to inform research, decisions, and policy.