2014
DOI: 10.1177/0894439314531214

What are the Links in a Web Survey Among Response Time, Quality, and Auto-Evaluation of the Efforts Done?

Abstract: Evaluating data quality is a key concern for researchers who want to be confident in their results. This seems even more crucial for web surveys, since researchers have less control over the data collection process. However, web surveys also make it possible to collect paradata that may help in evaluating quality. Using such paradata, it has been observed that some respondents of web panels spend much less time than expected completing the surveys. This raises worries about the quality …



Cited by 59 publications (36 citation statements)
References 15 publications (19 reference statements)
“…The original national sample of U.S. adults consisted of 6357 participants. Drawing from previous research on enhancing data quality in online survey research by screening for inattentive respondents and “speedy” completion times, participants were excluded if they wrongly answered “red herring” questions [40,41,42] (609 participants excluded). Relatedly, participants were also excluded if their response time was less than 6.5 min (1443 participants excluded based on the quartile), indicating that they did not answer the questions carefully [41,42].…”
Section: Methods · mentioning · confidence: 99%
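
The screening rule quoted above combines an attention check with a response-time cutoff. A minimal sketch of how such an exclusion might be applied, assuming a hypothetical pandas DataFrame with illustrative column names (`red_herring_correct`, `duration_min`) and the 6.5-minute cutoff mentioned in the excerpt:

```python
import pandas as pd

# Hypothetical survey export, one row per respondent; the file name and
# column names are illustrative, not taken from the cited study.
df = pd.read_csv("survey_responses.csv")
n_total = len(df)

# 1) Exclude respondents who answered the "red herring" item wrongly.
attentive = df[df["red_herring_correct"]]

# 2) Exclude respondents faster than the cutoff (6.5 minutes in the
#    excerpt, chosen there from the lower quartile of completion times).
CUTOFF_MIN = 6.5
clean = attentive[attentive["duration_min"] >= CUTOFF_MIN]

print(f"excluded for failed attention check: {n_total - len(attentive)}")
print(f"excluded for speeding:               {len(attentive) - len(clean)}")
print(f"analytic sample:                     {len(clean)}")
```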
“…An IMC presents typical survey items (e.g., Likert scale, check boxes) but also includes an instruction to ignore them and instead choose an atypical response to indicate that respondents are paying close attention. The original study reported that more than 30% of university students in the US failed the IMC (Oppenheimer et al., 2009), whereas other studies have revealed large variance across countries: 19% in Canada (Mandel, 2014), more than 50% in Japan (Miura and Kobayashi, 2015), and more than 70% in Spain, Mexico, and Colombia (Revilla and Ochoa, 2015); this indicates an urgent need to address the issue of satisficing in online experiments.…”
Section: Introduction · mentioning · confidence: 99%
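
Scoring an IMC reduces to checking whether the respondent picked the instructed, atypical option rather than a plausible-looking answer. A minimal sketch, with hypothetical option labels:

```python
# Scoring an instructional manipulation check (IMC): the item tells the
# respondent to ignore the visible question and select one specific,
# atypical option. Option labels here are hypothetical.
IMC_INSTRUCTED_ANSWER = "none of the above"

def passed_imc(response: str) -> bool:
    """True if the respondent followed the embedded instruction."""
    return response.strip().lower() == IMC_INSTRUCTED_ANSWER

responses = ["strongly agree", "none of the above", "agree"]
pass_rate = sum(passed_imc(r) for r in responses) / len(responses)
print(f"IMC pass rate: {pass_rate:.0%}")  # 33% in this toy data
```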
“…At the item level, shorter response times have been found to be related to worse quality, represented by more straightlining (Zhang & Conrad, 2014), that is, choosing the identical response for every item in a grid. The link between shorter response times and worse data quality was also reported by Revilla and Ochoa (2015), who used more quality indicators. At the questionnaire level, in Malhotra (2008), the group of low-educated respondents with shorter interview durations was found to be most likely to show satisficing behavior (i.e., a bias toward selecting the earlier choices).…”
Section: Introduction · mentioning · confidence: 67%
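
Straightlining, the quality indicator named in this excerpt, can be flagged mechanically: a respondent straightlines a grid when every item receives the identical response. A minimal sketch, assuming hypothetical item columns `q1`–`q5` forming one grid:

```python
import pandas as pd

# Hypothetical grid of five Likert items (values 1-5); the data and
# column names are illustrative only.
df = pd.DataFrame({
    "q1": [3, 5, 2],
    "q2": [3, 4, 2],
    "q3": [3, 5, 2],
    "q4": [3, 2, 2],
    "q5": [3, 4, 2],
})
grid = ["q1", "q2", "q3", "q4", "q5"]

# A respondent straightlines when every item in the grid gets the same
# answer, i.e., the row contains exactly one unique value.
df["straightliner"] = df[grid].nunique(axis=1) == 1
print(df["straightliner"])  # rows 0 and 2 straightline here
```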