Interview duration is an important variable in web surveys because it is a direct measure of response burden. In this article, we analyze the effects of survey design, respondent characteristics, and the interaction between the two on interview duration. For that purpose, we applied multilevel analysis to a data set of 21 web surveys on political attitudes and behavior. Our results showed that factors at both levels, the individual and the survey level, affected interview duration; however, the larger share of the variation in interview duration was explained by the characteristics of the respondents. In this respect, we illustrate the impact of mobile devices and panel recruitment on interview duration. In addition, we found important relationships between respondents' attitudes and how a web survey is designed: highly motivated respondents spent significantly more time answering cognitively demanding questions than less motivated respondents. When planning a survey, researchers therefore need to take into account not only the number and formats of questions but also the expected sample composition and how the participants will respond to the design of the web survey.
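To make the modeling approach concrete, below is a minimal sketch of the kind of two-level model the abstract describes: respondents (level 1) nested in surveys (level 2), with a random intercept per survey and a cross-level interaction between respondent motivation and the cognitive demands of the questionnaire. The file name, column names, and exact predictors are assumptions for illustration, not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per respondent, pooled across the 21 surveys.
df = pd.read_csv("interview_durations.csv")  # assumed file and columns

# Log-transform duration to reduce the right skew typical of response times.
df["log_duration"] = np.log(df["duration_seconds"])

# Random intercept per survey (level 2); fixed effects for respondent-level
# predictors; a cross-level interaction between respondent motivation and
# the number of cognitively demanding items in the survey.
model = smf.mixedlm(
    "log_duration ~ mobile_device + panel_recruit"
    " + motivation * n_demanding_items",
    data=df,
    groups=df["survey_id"],
)
print(model.fit().summary())
```

A positive coefficient on the interaction term would correspond to the finding reported above: the duration cost of demanding items is larger for highly motivated respondents.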
This article addresses two questions: whether paradata can improve models of panel attrition, and whether paradata can improve the effectiveness of propensity score weights in reducing attrition bias. The main advantage of paradata is that they are collected as a byproduct of the survey process. However, it remains an open question which paradata can be used to model attrition and to what extent these paradata are correlated with the variables of interest. Our analysis used data from a seven-wave web-based panel survey that was supplemented by three cross-sectional surveys. This split panel design allowed us to assess the magnitude of attrition bias for a large number of substantive variables and, furthermore, to analyze in detail the effectiveness of propensity score weights. Our results showed that some paradata (e.g., response times and participation history) improved the prediction of panel attrition, whereas others did not. In addition, not all the paradata that increased model fit resulted in weights that effectively reduced bias. These findings highlight the importance of selecting paradata that are linked to both the survey response process and the variables of interest. This article provides a first contribution to that challenge.
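The weighting approach the abstract evaluates can be sketched as follows: fit a model predicting continued participation from paradata, then weight retained respondents by the inverse of their predicted retention propensity. This is a minimal, hypothetical sketch; the file, column names, and the use of scikit-learn are assumptions, not the authors' implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row per wave-1 respondent, with paradata from
# earlier waves and an indicator for participation in the next wave.
df = pd.read_csv("wave1_respondents.csv")  # assumed file and columns

X = df[["median_response_time", "waves_participated"]]  # paradata predictors
y = df["responded_next_wave"]  # 1 = retained, 0 = attrited

# Predicted propensity of remaining in the panel.
propensity = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# Inverse-propensity weights for retained respondents only: those who
# resemble attriters (low predicted retention) are weighted up.
stayed = y == 1
df.loc[stayed, "attrition_weight"] = 1.0 / propensity[stayed]
```

In practice one would also check covariate balance after weighting and trim extreme weights; as the abstract notes, a better-fitting attrition model does not by itself guarantee weights that reduce bias in the substantive variables.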