Recently, various indicators have been proposed as indirect measures of nonresponse error in surveys. They employ auxiliary variables, external to the survey, to detect nonrepresentative or unbalanced response. A class of designs known as adaptive survey designs optimizes these indicators by applying different treatments to different subgroups. The natural question is whether the decrease in nonresponse bias produced by adaptive survey designs could also be achieved by nonresponse adjustment methods. We discuss this question and provide theoretical and empirical considerations, supported by a range of household and business surveys. We find evidence that more balanced response coincides with less nonresponse bias, even after adjustment.
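To make the kind of indicator concrete: the sketch below computes one widely used representativity measure, the R-indicator of Schouten, Cobben and Bethlehem (2009), defined as R = 1 − 2·S(ρ̂), where S(ρ̂) is the standard deviation of response propensities estimated from auxiliary variables. This is a minimal illustration under simplifying assumptions (unweighted propensities from a logistic regression); the data and variable names are invented, not those of any survey discussed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def r_indicator(X, responded):
    """R-indicator: R = 1 - 2 * std(rho_hat), where rho_hat are
    response propensities estimated from auxiliary variables X.
    R = 1 indicates perfectly representative response; lower
    values indicate more unbalanced response."""
    model = LogisticRegression().fit(X, responded)
    rho_hat = model.predict_proba(X)[:, 1]
    return 1.0 - 2.0 * rho_hat.std()

# Illustrative use: X holds frame variables (e.g., age, region
# dummies) for every sampled unit; `responded` flags respondents.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
responded = (rng.random(1000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
print(r_indicator(X, responded))
```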
In the recent literature on survey nonresponse, new indicators of the quality of data collection have been proposed. These include indicators of balance and representativity (of the set of respondents) and of distance (between respondents and nonrespondents), computed on available auxiliary variables. We use such indicators, in conjunction with paradata from the Swedish CATI system, to examine the inflow of data (as a function of the call attempt number) for the 2009 Swedish Living Conditions Survey (LCS). We then use the LCS 2009 data file to conduct several "experiments in retrospect". These consist of interventions at suitably chosen points, driven by the prospect of improved balance and reduced distance. The survey estimates computed on the resulting final response set are likely to be less biased, and the cost savings realized by fewer calls can be redirected to enhance the quality of other aspects of the survey design.
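As an illustration of how such an indicator can be tracked over the call-attempt sequence, the sketch below operationalizes imbalance as a Mahalanobis-type distance between respondent and full-sample auxiliary means, recomputed on the cumulative respondent set after each attempt. This is one simple operationalization for illustration only, not the exact definitions used in the study; all names are assumptions.

```python
import numpy as np

def imbalance(X_sample, responded):
    """Mahalanobis-type distance between respondent means and
    full-sample means of the auxiliary variables: one simple
    operationalization of response imbalance (0 = balanced)."""
    xbar_s = X_sample.mean(axis=0)
    xbar_r = X_sample[responded].mean(axis=0)
    Sigma = np.cov(X_sample, rowvar=False)
    d = xbar_r - xbar_s
    return float(d @ np.linalg.solve(Sigma, d))

def imbalance_by_attempt(X_sample, final_attempt, max_attempt):
    """Track imbalance of the cumulative respondent set as the
    call-attempt number grows; final_attempt[i] is the attempt at
    which unit i responded (np.inf for final nonrespondents)."""
    return [imbalance(X_sample, final_attempt <= a)
            for a in range(1, max_attempt + 1)]
```

A curve of this kind makes visible the point in the fieldwork after which further calls no longer improve balance, which is where an intervention "in retrospect" would be placed.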
In the design of surveys, a number of input parameters such as contact propensities, participation propensities, and costs per sample unit play a decisive role. In ongoing surveys, these survey design parameters are usually estimated from previous experience and updated gradually with new experience. In new surveys, these parameters are estimated from expert opinion and experience with similar surveys. Although survey institutes have fair expertise and experience, the postulation, estimation, and updating of survey design parameters is rarely done in a systematic way. This article presents a Bayesian framework to include and update prior knowledge and expert opinion about the parameters.
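As a minimal sketch of the kind of updating such a framework involves: a propensity parameter can be given a conjugate Beta prior, elicited from expert opinion, and updated with observed fieldwork outcomes. The article's actual framework is richer than this; the Beta-Binomial model and all numbers below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class BetaPrior:
    """Beta(a, b) prior on a propensity, e.g. elicited from expert
    opinion or fitted to experience with similar surveys."""
    a: float
    b: float

    def update(self, contacts: int, attempts: int) -> "BetaPrior":
        """Conjugate update with new fieldwork data: `contacts`
        successes out of `attempts` trials."""
        return BetaPrior(self.a + contacts, self.b + attempts - contacts)

    @property
    def mean(self) -> float:
        return self.a / (self.a + self.b)

# Expert opinion: contact propensity around 0.6, worth ~20 cases.
prior = BetaPrior(a=12, b=8)
posterior = prior.update(contacts=410, attempts=800)
print(prior.mean, posterior.mean)   # 0.6 -> roughly 0.51
```

The prior's pseudo-sample size (a + b) encodes how strongly expert opinion should resist new data, which is precisely the kind of choice a systematic framework makes explicit.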
One objective of adaptive data collection is to secure a better-balanced survey response. Methods exist for this purpose, including balancing with respect to selected auxiliary variables. Such variables are also used at the estimation stage for (calibrated) nonresponse weighting adjustment. Earlier research has shown that the use of auxiliary information at the estimation stage can reduce bias, perhaps considerably, but without eliminating it. The question is: would it have contributed further to bias reduction if, prior to estimation, that information had also been used in data collection, to secure a more balanced set of respondents? If the answer is yes, there is a clear incentive, from the point of view of better accuracy in the estimates, to practice adaptive survey design; otherwise perhaps not. A key question is how the regression relationship between the survey variable and the auxiliary vector presents itself in the sample as opposed to the response set. Strength in the relationship is helpful but is not the only consideration. The dilemma with nonresponse is one of inconsistent regression: a regression model appropriate for the sample often fails for the responding subset, because nonresponse is selective, not random. In this article, we examine how nonresponse bias in survey estimates depends on regression inconsistency, both seen as functions of response imbalance. As a measure of bias we use the deviation of the calibration-adjusted estimator from the unbiased estimate under full response. We study how the deviation and the regression inconsistency depend on the imbalance. We observe in empirical work that both can be reduced, to a degree, by efforts to reduce imbalance through adaptive data collection.
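To fix ideas, the sketch below implements linear (GREG-type) calibration of design weights to full-sample auxiliary totals and reports the deviation of the calibration-adjusted estimate from the full-response estimate, a quantity observable here only because the data are simulated. The selective response mechanism and all parameter values are illustrative assumptions, not the article's empirical setting.

```python
import numpy as np

def calibration_weights(d, X_r, t_x):
    """Linear calibration: adjust design weights d_k of respondents
    so the weighted auxiliary totals hit the target t_x (here the
    full-sample totals). Solution: w_k = d_k * (1 + lambda' x_k)."""
    A = (X_r * d[:, None]).T @ X_r            # sum_r d_k x_k x_k'
    lam = np.linalg.solve(A, t_x - d @ X_r)   # calibration equations
    return d * (1.0 + X_r @ lam)

# Simulated population with response that depends on y (selective),
# so calibration reduces but cannot eliminate the bias.
rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3)); X[:, 0] = 1.0             # intercept column
y = X @ np.array([5.0, 2.0, 0.0]) + rng.normal(size=n)
r = rng.random(n) < 1 / (1 + np.exp(-(X[:, 1] + 0.5 * y)))
d = np.full(n, 1.0)                                    # equal design weights
w = calibration_weights(d[r], X[r], d @ X)
print((w @ y[r] - d @ y) / n)   # per-unit deviation from full response
```

The residual deviation printed at the end is the kind of bias measure the abstract describes: what remains after calibration, as a function of how unbalanced the respondent set is.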