In this article, we evaluate how the analysis of open-ended probes in an online cognitive interview can serve as a metric to identify cases that should be excluded due to disingenuous responses by ineligible respondents. We analyze data collected in 2019 via an online opt-in panel in English and Spanish to pretest a public opinion questionnaire (n = 265 in English and 199 in Spanish). We find that analyzing open-ended probes allowed us to flag cases completed by respondents who demonstrated problematic behaviors (e.g., answering many probes with repetitive textual patterns or strings of random characters), as well as to identify cases completed by ineligible respondents posing as eligible respondents (i.e., non-Spanish-speakers posing as Spanish-speakers). These findings indicate that data collected for multilingual pretesting research using online opt-in panels likely require additional evaluations of data quality. We find that open-ended probes can help determine which cases should be replaced when conducting pretesting with opt-in panels. We argue that open-ended probes in online cognitive interviews, while more time-consuming and expensive to analyze than closed-ended questions, serve as a valuable method of verifying response quality and respondent eligibility, particularly for researchers conducting multilingual surveys with online opt-in panels.
Survey researchers conducting pretesting via cognitive interviews or focus groups often use vignettes to evaluate questions and answer categories in order to identify possible measurement problems, particularly for reporting situations that are relatively rare or for sensitive topics. Although research has compared the performance of vignettes in Spanish and Asian languages in cognitive interviews, there is little research that compares the performance of vignettes across pretesting methodologies in languages other than English. To address this gap, we investigated the performance of a vignette about a homeownership question that was administered in focus groups and cognitive interviews conducted in seven languages: English, Spanish, Chinese, Korean, Vietnamese, Russian, and Arabic. We coded the cognitive interviews and focus group summaries to quantify the responses and the types of comprehension problems respondents had, and we compared the vignette response data from cognitive interviews to the data from focus groups, by language. We find that administering the vignette in cognitive interviews was more effective than administering it in focus groups for uncovering difficulties respondents had with the survey question, particularly for Spanish- and Arabic-speakers. We conclude that using vignettes in focus groups without cognitive interviews may not reveal problems with survey questions as effectively. Regardless of methodology, the vignette task was challenging for certain language groups, and further research is needed on the cross-cultural adaptation of vignettes.