Purpose
Despite the rapid spread of mobile devices in survey participation, little is known about the potential and the challenges that arise from this increase. The purpose of this paper is to study how respondents' preferences drive their choice of device when participating in surveys. Furthermore, the paper evaluates participants' tolerance when specifically asked to use mobile devices and to carry out other specific tasks, such as taking photographs.

Design/methodology/approach
Data were collected through surveys in Spain, Portugal and Latin America by Netquest, an online fieldwork company.

Findings
Netquest panellists still mainly preferred to participate in surveys using personal computers. Nevertheless, the use of tablets and smartphones in surveys showed an increasing trend; more panellists would prefer mobile devices if the questionnaires were adapted to them. Most respondents were not opposed to the idea of participating in tasks such as taking photographs or sharing GPS information.

Research limitations/implications
The research concerns an opt-in online panel that covers a specific area; the findings may differ for probability-based panels and other areas.

Practical implications
The findings show that online access panels need to adapt their surveys to mobile devices to satisfy the increasing demand from respondents. This will also enable new, and potentially very interesting, data collection methods.

Originality/value
This study contributes to survey methodology with updated findings on a currently underexplored area. Furthermore, it provides commercial online panels with useful information for determining their future strategies.
Surveys have been used as a main tool of data collection in many areas of research for many years. However, the environment is changing increasingly quickly, creating new challenges and opportunities. This article argues that, in this new context, human memory limitations lead to inaccurate results when surveys are used to study objective online behavior: people cannot recall everything they did. It therefore investigates the possibility of using, in addition to survey data, passive data from a tracking application (called a "meter") installed on participants' devices to register their online behavior. After evaluating the extent of some of the main drawbacks of passive data collection with a case study (the Netquest metered panel in Spain), the article shows that the web survey and the meter lead to very different results about the online behavior of the same sample of respondents, demonstrating the need to combine several sources of data collection in the future.
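To make the survey-versus-meter comparison concrete, the following is a minimal sketch, assuming hypothetical self-reported visit counts and meter logs for the same panelists; the field names and the "visits to a given site" measure are illustrative inventions, not Netquest's actual data model.

```python
import pandas as pd

# Hypothetical data: self-reported weekly visits to a news site from the
# survey, and visits actually logged by the meter for the same panelists.
survey = pd.DataFrame({
    "panelist": [1, 2, 3],
    "reported_visits": [2, 10, 0],
})
meter = pd.DataFrame({
    "panelist": [1, 1, 2, 3, 3, 3],
    "url": ["news.example"] * 6,  # one row per logged visit
})

# Count metered visits per panelist and join them with the self-reports.
logged = meter.groupby("panelist").size().rename("metered_visits")
merged = survey.set_index("panelist").join(logged).fillna(0)

# Recall error: the gap between what people report and what the meter saw.
merged["recall_error"] = merged["reported_visits"] - merged["metered_visits"]
print(merged)
```

Even this toy comparison shows how the two sources can diverge per respondent, which is the kind of discrepancy the article reports at the sample level.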
Evaluating data quality is a key concern for researchers who want to be confident in their results. When web surveys are used, this seems even more crucial, since researchers have less control over the data collection process. However, they also have the possibility to collect paradata that may help evaluate quality. Using such paradata, it has been noticed that some respondents in web panels spend much less time than expected completing the surveys, which raises concerns about the quality of the data obtained. Nevertheless, not much is known about the link between response times (RTs) and quality. The goal of this study is therefore to examine the link between respondents' RTs in an online survey and other, more usual quality indicators used in the literature: properly following an instructional manipulation check, coherence and precision of answers, absence of straight-lining, and so on. In addition, we are interested in the link of RT and the quality indicators with respondents' self-evaluation of the effort they made to answer the survey. Using a structural equation modeling approach that allows separating the structural and measurement models and controlling for potential spurious effects, we find a significant relationship between RT and quality in the three countries studied. We also find a significant, but weaker, relationship between RT and self-evaluation. However, we did not find a significant link between self-evaluation and quality.
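As an illustration of the kind of indicators involved, the sketch below computes two of them (a speeding flag based on RT and a straight-lining flag) from hypothetical survey paradata; the column names, grid items, and the 0.3-times-median speed threshold are assumptions for the example, not the study's actual operationalization.

```python
import pandas as pd

# Hypothetical paradata: one row per respondent, with total completion
# time in seconds and answers to a five-item grid question (q1..q5).
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "rt_seconds": [540, 95, 610, 480],
    "q1": [3, 4, 2, 5],
    "q2": [3, 4, 1, 4],
    "q3": [3, 4, 2, 4],
    "q4": [3, 4, 3, 5],
    "q5": [3, 4, 2, 4],
})
grid = ["q1", "q2", "q3", "q4", "q5"]

# Speeding flag: completion far below the median time. The 0.3 * median
# cut-off is an illustrative choice, not a threshold from the study.
df["speeder"] = df["rt_seconds"] < 0.3 * df["rt_seconds"].median()

# Straight-lining flag: identical answers across all items of the grid.
df["straight_liner"] = df[grid].nunique(axis=1) == 1

print(df[["respondent", "speeder", "straight_liner"]])
```

In this toy data, the fastest respondent is also a straight-liner, mirroring the RT-quality association the study tests formally with structural equation models.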
Passive data from a tracking application (or "meter") installed on participants' devices to register the URLs visited have great potential for studying people's online activities. However, given privacy concerns, obtaining cooperation to install a meter can be difficult and can lead to selection bias. Therefore, in this article, we address three research questions: (1) To what extent are panelists willing to install a meter? (2) On which devices do they install it? (3) How do panelists who installed the meter differ from those who were invited but did not? Using data from online non-probability opt-in panels in nine countries, we found that the proportion of panelists installing the meter usually varies from 20% to 42%. Moreover, 20-25% of participants installed the meter on three or more devices. Finally, those who were invited but did not install the meter differ from those who did.