Online data collection methods are making developmental research easier and more accessible for researchers and participants alike. While online data collection soared in popularity among developmental scientists during the COVID-19 pandemic, its potential extends beyond providing a means of safe, socially distanced data collection. In particular, advances in video-conferencing software have enabled researchers to engage in face-to-face interactions with participants from nearly any location at any time. Because these methods are relatively new, however, many researchers remain uncertain about the differences among available approaches as well as the validity of online methods more broadly. In this article, we aim to address both issues, focusing on moderated (synchronous) data collected using video-conferencing software (e.g., Zoom). First, we review existing approaches for designing and executing moderated online studies with young children. We then present concrete examples of studies that implemented choice and verbal measures (Studies 1 and 2) and looking-time measures (Studies 3 and 4) across both in-person and online moderated data collection. Direct comparisons of the two methods within each study, as well as a meta-analysis of all studies, suggest that the results from the two methods are comparable, providing empirical support for the validity of moderated online data collection. Finally, we discuss current limitations of online data collection and possible solutions, as well as its potential to increase the accessibility, diversity, and replicability of developmental science.
How do we learn who is good at what? Others’ competence is unobservable and often must be inferred from observable evidence, such as successes and failures. However, the same performance can indicate different levels of competence depending on the context, and objective evaluation metrics are not always available. Building on recent advances in research on children’s use of emotion as information, here we ask whether expressions of surprise inform inferences about competence. Participants saw scenarios (sports, academics) in which two students achieved identical outcomes but a teacher showed surprise toward one student and no surprise toward the other. In Exp. 1, adults inferred that the successful student who elicited the teacher’s surprise was less competent than the other student; this pattern reversed when both students failed. Exp. 2 (4- to 9-year-olds) found initial evidence for such inferences in school-aged children. These findings have implications for promoting healthy social comparisons and preventing the acquisition of negative stereotypes from nonverbal cues.