Critics of public opinion polls often claim that methodological shortcuts taken to collect timely data produce biased results. This study compares two random digit dial national telephone surveys that used identical questionnaires but very different levels of effort: a "Standard" survey conducted over a 5-day period that used a sample of adults who were home when the interviewer called, and a "Rigorous" survey conducted over an 8-week period that used random selection from among all adult household members. Response rates, computed according to AAPOR guidelines, were 60.6 percent for the Rigorous and 36.0 percent for the Standard study. Nonetheless, the two surveys produced similar results. Across 91 comparisons, no difference exceeded 9 percentage points, and the average difference was about 2 percentage points. Most of the statistically significant differences were among demographic items. Very few significant differences were found on attention to media and engagement in politics, social trust and connectedness, and most social and political attitudes, including even those toward surveys.
From 1979 to 1996, the response rate of the Survey of Consumer Attitudes remained roughly 70 percent, but the number of calls needed to complete an interview and the proportion of interviews requiring refusal conversion both doubled. Using call-record histories, we explore what the consequences of lower response rates would have been if these additional efforts had not been undertaken. Both the number of calls and initial cooperation (vs. initial refusal) are related to the Index of Consumer Sentiment (ICS), but only the number of calls survives a control for demographic characteristics. We assess the impact of excluding respondents who required refusal conversion (which reduces the response rate by 5-10 percentage points), respondents who required more than five calls to complete the interview (reducing the response rate by about 25 percentage points), and those who required more than two calls (a reduction of about 50 percentage points). We found no effect of excluding any of these respondent groups on cross-sectional estimates of the ICS using monthly samples of hundreds of cases. For yearly estimates, based on thousands of cases, the exclusion of respondents who required more calls (though not of initial refusers) had an effect, but a very small one. None of the exclusions substantially affected estimates of change over time in the ICS, irrespective of sample size.

A basic tenet of survey research is that high response rates are better than low ones. Indeed, a low response rate is one of the few outcomes or features that, taken by itself, is considered a major threat to the usefulness of a survey.
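The exclusion analysis described above can be sketched as follows. This is a minimal illustration only: the synthetic data, field names (`calls`, `initial_refusal`, `ics`), and distributions are assumptions for demonstration, not the SCA's actual call records or the ICS scoring formula.

```python
import random

random.seed(0)

# Hypothetical call-record data: each respondent has a call count, an
# initial-refusal flag, and an ICS-style score. Values are illustrative.
respondents = [
    {"calls": random.randint(1, 12),
     "initial_refusal": random.random() < 0.15,
     "ics": random.gauss(90, 15)}
    for _ in range(1000)
]

def mean_ics(rows):
    """Cross-sectional estimate: mean ICS score over a respondent subset."""
    return sum(r["ics"] for r in rows) / len(rows)

# Simulate lower-effort designs by dropping harder-to-reach respondents,
# mirroring the three exclusions examined in the study.
full = respondents
no_refusal_converts = [r for r in respondents if not r["initial_refusal"]]
within_five_calls = [r for r in respondents if r["calls"] <= 5]
within_two_calls = [r for r in respondents if r["calls"] <= 2]

for label, subset in [("full sample", full),
                      ("excluding refusal conversions", no_refusal_converts),
                      ("five calls or fewer", within_five_calls),
                      ("two calls or fewer", within_two_calls)]:
    retained = 100 * len(subset) / len(full)
    print(f"{label}: n={len(subset)} ({retained:.0f}% retained), "
          f"mean ICS = {mean_ics(subset):.1f}")
```

Comparing the subset means against the full-sample mean shows how much (or, as the study found, how little) each simulated reduction in effort shifts the cross-sectional estimate.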
The lengthy history and extended periods of relative design stability of the University of Michigan's Survey of Consumer Attitudes (SCA) make it an important resource for documenting response rate changes over the better part of survey research's history.