Non-response weighting is a commonly used method to adjust for bias due to unit nonresponse in surveys. Theory and simulations show that, to reduce bias effectively without increasing variance, a covariate that is used for non-response weighting adjustment needs to be highly associated with both the response indicator and the survey outcome variable. In practice these requirements pose a challenge that is frequently overlooked, because such covariates are often unobserved or may not exist. Surveys have recently begun to collect supplementary data, such as interviewer observations and other proxy measures of key survey outcome variables. To the extent that these auxiliary variables are highly correlated with the actual outcomes, they are promising candidates for non-response adjustment. In the present study, we examine traditional covariates and new auxiliary variables for the National Survey of Family Growth, the Medical Expenditure Panel Survey, the American National Election Survey, the European Social Surveys, and the University of Michigan Transportation Research Institute survey. We provide empirical estimates of the association between proxy measures and response to the survey request, as well as with the actual survey outcome variables. We also compare unweighted and weighted estimates under various non-response models. Our results from multiple surveys, with multiple recruitment protocols, from multiple organizations, on multiple topics, show the difficulty of finding suitable covariates for non-response adjustment and the need to improve the quality of auxiliary data.
(Kreuter et al., Journal of the Royal Statistical Society: Series A, 173, 2010)
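The requirement that an adjustment covariate be strongly related to both the response indicator and the survey outcome can be made concrete with a small weighting-class simulation. This is a hedged sketch with invented numbers, not data from any of the surveys named above: a binary covariate x drives both the outcome y and the response propensity, so weighting respondents by the inverse of their class response rate removes the bias that the unweighted respondent mean carries.

```python
import random

random.seed(42)

# Invented population of 100,000: a binary covariate x drives both the
# outcome y and the probability of responding (all numbers illustrative).
N = 100_000
pop = []
for _ in range(N):
    x = 1 if random.random() < 0.5 else 0
    y = 10 + 5 * x + random.gauss(0, 1)      # outcome strongly tied to x
    responds = random.random() < (0.8 if x == 1 else 0.4)
    pop.append((x, y, responds))

true_mean = sum(y for _, y, _ in pop) / N
resp = [(x, y) for x, y, r in pop if r]
unweighted = sum(y for _, y in resp) / len(resp)

# Weighting-class adjustment: weight each respondent by the inverse of
# the response rate observed in its covariate class.
rate = {}
for c in (0, 1):
    n_c = sum(1 for x, _, _ in pop if x == c)
    r_c = sum(1 for x, _, r in pop if x == c and r)
    rate[c] = r_c / n_c
weighted = sum(y / rate[x] for x, y in resp) / sum(1 / rate[x] for x, _ in resp)
```

If x predicted response but not y, the weights would mainly add variance; if it predicted y but not response, the unweighted mean would already be unbiased. The simulation illustrates why both associations are needed for the adjustment to pay off.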
Nonresponse is a prominent problem in sample surveys. At face value, it reduces trust in survey estimates: nonresponse undermines the probability-based inferential mechanism and introduces the potential for nonresponse bias. There are other important consequences as well. The effort to limit increasing nonresponse has led to higher survey costs, as greater resources are allocated to measuring and reducing nonresponse. Nonresponse has also led to greater survey complexity in the design, implementation, and processing of survey data, such as the use of multiphase and responsive designs. The use of mixed-mode and multiframe designs to address nonresponse increases complexity and also introduces other sources of error. Surveys must rely to a greater extent on statistical adjustments and auxiliary data. This article describes the major consequences of survey nonresponse, with particular attention to recent years.
Traditional statistical analyses of interviewer effects on survey data do not examine whether these effects change over a field period. However, the nature of the survey interview is dynamic: interviewers' behaviors and perceptions may evolve as they gain experience, potentially affecting data quality. This paper examines how interview length and interviewer evaluations of respondents change over interviewers' workloads. Multilevel models with random interviewer effects are used to account for the clustering of cases within interviewers and for individual interviewer characteristics in the 1984, 1988, and 2000 National Election Studies. The 1984 and 1988 NES released their samples in four replicates, minimizing the confounding of order within an interviewer's workload with sample composition. We find that both measures change significantly over the course of the studies. Interviewers' prior survey experience was also significantly and negatively related to the length of the interview. These findings have implications for interviewer training before and during studies, and they suggest future research to reveal why these behaviors and perceptions change.
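The clustering that motivates random interviewer effects is typically summarized by the intraclass correlation (ICC): the share of outcome variance attributable to interviewers. As a hedged illustration (the interviewer counts, workload sizes, and variance components below are invented, not taken from the NES), the one-way ANOVA estimator recovers the ICC of simulated interview lengths:

```python
import random

random.seed(7)

# Invented example: 200 interviewers, 50 interviews each; interview
# length has an interviewer-level SD of 1 and a within-interviewer SD
# of 2, so the true ICC is 1 / (1 + 4) = 0.2.
n_int, k = 200, 50
sigma_b, sigma_w = 1.0, 2.0

lengths = []
for _ in range(n_int):
    u = random.gauss(0, sigma_b)  # interviewer random effect
    lengths.append([30 + u + random.gauss(0, sigma_w) for _ in range(k)])

grand = sum(sum(g) for g in lengths) / (n_int * k)
means = [sum(g) / k for g in lengths]

# One-way ANOVA estimator of the intraclass correlation:
msb = k * sum((m - grand) ** 2 for m in means) / (n_int - 1)
msw = sum((y - m) ** 2 for g, m in zip(lengths, means) for y in g) / (n_int * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
```

Even a modest ICC inflates the effective variance of interviewer-administered measures by roughly 1 + (k - 1) x ICC for a workload of k cases, which is why multilevel models rather than naive pooled analyses are used in studies like this one.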
Background: Attrition, or dropout, is a problem faced by many online health interventions, potentially threatening the inferential value of online randomized controlled trials.
Objective: In the context of a randomized controlled trial of an online weight management intervention, in which 85% of the baseline participants were lost to follow-up at the 12-month measurement, the objective was to examine the effect of nonresponse on key outcomes and to explore ways to reduce attrition in follow-up surveys.
Methods: A sample of 700 nonrespondents to the 12-month online follow-up survey was randomly assigned to a mail or telephone nonresponse follow-up survey. We examined response rates in the two groups, costs of follow-up, reasons for nonresponse, and mode effects. We ran several logistic regression models predicting response or nonresponse to the 12-month online survey, as well as response or nonresponse to the follow-up survey.
Results: We analyzed 210 follow-up respondents in the mail group and 170 in the telephone group. Response rates of 59% and 55% were obtained for the telephone and mail nonresponse follow-up surveys, respectively. A total of 197 respondents (51.8%) gave reasons related to technical issues or email as a means of communication, with older people more likely to give technical reasons for noncompletion; 144 (37.9%) gave reasons related to the intervention or the survey itself. Mail follow-up was substantially cheaper: we estimate that the telephone survey cost about US $34 per sampled case, compared to US $15 for the mail survey. The telephone responses were subject to possible social desirability effects, with telephone respondents reporting significantly greater weight loss than mail respondents. The respondents to the nonresponse follow-up did not differ significantly from the 12-month online respondents on key outcome variables.
Conclusions: Mail is an effective way to reduce attrition to online surveys, while telephone follow-up might lead to overestimating weight loss in both the treatment and control groups. Nonresponse bias does not appear to be a significant factor in the conclusions drawn from the randomized controlled trial.
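The Methods above mention logistic regression models predicting response versus nonresponse. As a minimal sketch of that kind of model (the rates, sample size, and "older" indicator below are invented for illustration, not taken from the trial), the following simulates an age indicator that lowers the response probability and fits the regression by plain gradient ascent on the log-likelihood:

```python
import math
import random

random.seed(3)

# Invented setup: an "older" indicator that lowers the probability of
# responding to the online follow-up (rates are illustrative only).
n = 4_000
X = [1 if random.random() < 0.5 else 0 for _ in range(n)]
p_resp = {0: 0.50, 1: 0.35}
Y = [1 if random.random() < p_resp[x] else 0 for x in X]

# Logistic regression fit by gradient ascent on the mean log-likelihood.
b0 = b1 = 0.0
lr = 1.0
for _ in range(300):
    g0 = g1 = 0.0
    for x, y in zip(X, Y):
        mu = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted response prob.
        g0 += y - mu
        g1 += (y - mu) * x
    b0 += lr * g0 / n
    b1 += lr * g1 / n

# exp(b1) estimates the odds ratio of responding for "older" vs. not.
odds_ratio = math.exp(b1)
```

With a single binary predictor, the maximum-likelihood fit simply reproduces the log odds ratio between the two groups, which makes the sketch easy to check against the simulated rates: (0.35/0.65) / (0.50/0.50), about 0.54.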