A survey’s completion rate is one of its most important data quality measures. Numerous published studies have examined web survey completion rates through experimental approaches. In this study, we expand the existing literature by examining the predictors of web survey completion rate using 25,080 real-world web surveys conducted by a single online panel. Our findings are consistent with the literature on several dimensions, such as a negative relationship between completion rate and both survey length and question difficulty. In addition, surveys without progress bars have higher completion rates than surveys with progress bars. This study also generates new insights into survey design features, such as the impact of the type and length of the first question on completion rate.
This study examines the use of trap questions as indicators of data quality in online surveys. Trap questions are intended to identify respondents who are not paying close attention to survey questions and who are therefore likely providing sub-optimal responses not only to the trap question itself but to other questions in the survey. We conducted three experiments using an online non-probability panel. In the first experiment, we examine whether responses differ between surveys with one trap question and those with two trap questions. In the second experiment, we examine responses to surveys with trap questions of varying difficulty. In the third experiment, we test the level of difficulty, the placement of the trap question, and other forms of attention checks. In all studies, we correlate responses to the trap question(s) with other data quality checks, most of which were derived from the literature on satisficing. We also compare responses to several substantive questions by whether respondents passed or failed the trap questions, which tells us whether participants who failed the trap questions gave consistently different answers from those who passed. We find that pass rates vary widely across the trap question types we tested, from 27% to 87%. We also find evidence that some types of trap questions are more strongly correlated with other data quality measures.
While the choice between matrix and item-by-item questions has received considerable attention in the literature, it remains unclear in which situations one is better than the other. Building upon previous findings, this study expands this line of research by examining whether the difference between the two question types is moderated by the number of response options. Through a web survey experiment, this study compares matrix and item-by-item questions with 2, 3, 4, 5, 7, 9, and 11 response options. We also investigate the impact of the device used to complete the survey on data quality. The results show that straightlining and response times are similar between the two question types across all numbers of response options, but item nonresponse tends to be higher for matrix than for item-by-item questions, especially among mobile respondents. Measurement models also reveal measurement equivalence between the two question types when there are fewer than seven response options. For matrices with 9 or 11 response options, analyses reveal substantial differences compared to item-by-item questions.

Keywords: matrix question, item-by-item question, web survey, survey experiment, data quality

As more and more surveys move to online completion, whether by PC or mobile device, survey researchers are striving to decrease respondent burden without sacrificing data quality. One important decision in this regard is the use of grid questions. When asking multiple questions that share the same set of response options, researchers often have two choices: format them as item-by-item questions (asking them one at a time) or group them into a matrix (also called grid) format and present them together. While the latter is more succinct, it can affect data quality (e.g., see Couper, Traugott, & Lamias, 2001).
Although Likert scales in agree-disagree (A/D) format are popular in surveys, the quality of the data they yield is debated among researchers. Recognizing the measurement issues involved with the A/D format, researchers have developed other question formats to measure attitudes. In this study, we focus on an alternative question type, the item-specific (IS) question, which asks the respondent to choose the option that best describes his or her attitude. Using political efficacy items from the American National Election Studies (ANES), we compare extreme response style (ERS) between A/D and IS scales. Latent class factor analysis shows that ERS exists in both A/D and IS scale formats, but differs slightly across the two. Moreover, when analyzing ERS within subjects across two waves, a single ERS accounts for both question formats after controlling for the correlation within respondents. This last finding suggests that ERS is a stable respondent characteristic.