2021
DOI: 10.1038/s41562-021-01117-5
A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use

Abstract: There is widespread public and academic interest in understanding the uses and effects of digital media. Scholars primarily use self-report measures of the quantity or duration of media use as proxies for more objective measures, but the validity of these self-reports remains unclear. Advancements in data collection techniques have produced a collection of studies indexing both self-reported and log-based measures. To assess the alignment between these measures, we conducted a meta-analysis of this research. B…

Cited by 438 publications (281 citation statements)
References 108 publications
“…Following the general trend in social psychology of using estimates of behavior as a proxy for actual behavioral measures (Sassenberg & Ditrich, 2019), most studies, however, rely on self-reported measures of DMU (Griffioen et al., 2020). When compared to more objective measures of DMU (i.e., digital trace data or device usage logs), such estimates are generally inaccurate (Parry et al., 2021). Crucially, evidence suggests that the error in self-reported DMU is likely biased systematically by factors that are fundamental to the effect being investigated: respondents' volume of use (Araujo et al., 2017; Boase & Ling, 2013; Ernala et al., 2020; Scharkow, 2016; Vanden Abeele et al., 2013) and level of depression (Sewall et al., 2020).…”
mentioning
confidence: 99%
“…Although self-reported estimates are prevalent in studies of DMU, there is strong evidence that these measures do not capture what they are intended to measure: actual use (Parry et al., 2021). Rather, as is common with self-reports of behavior in many domains (see, e.g., Jenner et al., 2006; Kormos & Gifford, 2014), self-report measures of DMU capture respondents' perceptions of their use rather than their actual use (Scharkow, 2016; Sewall et al., 2020).…”
mentioning
confidence: 99%
“…As the pandemic situation has naturally impeded data collection from human subjects, much research has relied on survey data 57. By combining objective tracking of activity, sleep, and phone data with participants' self-report through ecological momentary assessment, the current study provides longitudinal insights while minimizing reporting biases associated with (retrospective) survey responses 14,58–61…”
Section: Discussion
mentioning
confidence: 99%
“…However, self-reports of mobile media use have faced criticism for being prone to a number of biases. For example, self-reports of constructs that have verifiable answers (e.g., time spent on smartphones) have been found to be fairly inaccurate, if not systematically biased (e.g., confounded with well-being and mental health) (Parry et al., 2021). For self-reports of other constructs (e.g., habitual phone use), it can still be challenging to determine validity, especially when measured in a cross-sectional survey (Ohme, Albaek, & de Vreese, 2016).…”
Section: Methodological Approaches and Advances
mentioning
confidence: 99%