2000
DOI: 10.1086/317759

Consequences of Reducing Nonresponse in a National Telephone Survey

Abstract: Critics of public opinion polls often claim that methodological shortcuts taken to collect timely data produce biased results. This study compares two random digit dial national telephone surveys that used identical questionnaires but very different levels of effort: a "Standard" survey conducted over a 5-day period that used a sample of adults who were home when the interviewer called, and a "Rigorous" survey conducted over an 8-week period that used random selection from among all adult household members. Re…

Cited by 829 publications (526 citation statements). References 12 publications.
“…A great deal of research has explored the impact of nonresponse on telephone survey results by assessing whether respondents and nonrespondents differ from one another (see Groves and Couper 1998 for a review). This has been done by (1) conducting a follow-up survey to interview people who did not respond to the initial survey (e.g., Massey, Barker, and Hsiung 1981), (2) comparing the wave-one characteristics of respondents who were and were not lost at follow-up waves of interviewing in panel studies (e.g., Schejbal and Lavrakas 1995), (3) comparing early vs. late responders to survey requests (under the assumption that late responders are more similar to nonresponders than early responders; e.g., Merkle, Bauman, and Lavrakas 1993), (4) comparing people who refuse an initial survey request to those who never refuse (e.g., O'Neil 1979; Retzer, Schipani, and Cho 2004), (5) using archival records to compare the personal and/or community characteristics of households that do and do not respond to survey requests (e.g., Groves and Couper 1998), and (6) comparing the characteristics of respondents in an RDD survey sample to those of the population as a whole (e.g., Keeter et al. 2000; Mulry-Liggan 1983).…”
Section: Effects of Response Rates on the Accuracy of Survey Results
confidence: 99%
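
Approach (6) in the statement above, comparing an RDD respondent sample against population benchmarks, can be illustrated with a short goodness-of-fit check. The following is a minimal Python sketch under invented assumptions: the age categories, respondent counts, and benchmark proportions are hypothetical and are not taken from any of the cited studies.

# Hypothetical illustration of approach (6): comparing the demographic
# composition of an RDD respondent sample against population benchmarks.
# All numbers below are made up for illustration.
from scipy.stats import chisquare

# Observed respondent counts by age group (hypothetical survey, n = 1,000)
observed = [180, 260, 310, 250]          # 18-29, 30-44, 45-64, 65+

# Population proportions for the same age groups (hypothetical benchmarks)
population_props = [0.21, 0.26, 0.33, 0.20]
expected = [p * sum(observed) for p in population_props]

# A significant result would suggest the respondent pool differs from the
# population on this characteristic, one possible symptom of nonresponse bias.
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
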
“…Some of the most direct evidence about nonresponse bias comes from research comparing responses from similar surveys that achieved different response rates (e.g., Groves, Presser, and Dipko 2004; Keeter, Miller, Kohut, Groves, and Presser 2000; Traugott, Groves, and Lepkowski 1987). For example, Keeter et al. (2000) varied the amount of effort put into obtaining high response rates in two surveys with identical questionnaires by manipulating the field period length, extent of refusal conversion attempts, and number of call attempts.…”
Section: Effects of Response Rates on the Accuracy of Survey Results
confidence: 99%
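
The design described above amounts to comparing the same estimate across a lower-effort and a higher-effort survey. A minimal sketch of such a comparison, assuming a simple two-proportion z-test and wholly hypothetical numbers (this is not the analysis used in the cited paper), might look like this:

# Hypothetical sketch: comparing an estimate from two surveys that used
# identical questionnaires but different levels of effort (a "Standard"
# and a "Rigorous" design). Counts and sample sizes are invented.
from statsmodels.stats.proportion import proportions_ztest

counts = [520, 505]        # respondents giving a particular answer in each survey
nobs = [1000, 1000]        # respondents asked that item in each survey

# A nonsignificant difference would be consistent with the lower-effort
# survey yielding a substantively similar estimate on this item.
stat, p_value = proportions_ztest(count=counts, nobs=nobs)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
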
“…This finding implies that if we want to include information about these countries, the results may be biased. In addition, debate is growing over whether low response rates are less problematic for survey estimates than previously assumed (31). Nevertheless, the prevalence rates of self-medication in countries with low response rates should be considered rough estimates and interpreted as an indication that the problem exists.…”
Section: Discussion
confidence: 99%