This article illustrates the effects of a dynamic adaptive design in a large government survey. We present findings from the 2015 National Survey of College Graduates Adaptive Design Experiment, including results and discussion of sample representativeness, response rates, and cost. We also consider the effect of truncating data collection (examining alternative stopping rules) on these metrics. In this experiment, we monitored sample representativeness continuously and altered data collection procedures (increasing or decreasing contact effort) to improve it. Cases that were overrepresented in the achieved sample were assigned to more passive modes of data collection (web or paper) or withheld from the group of cases that received survey reminders, whereas underrepresented cases were assigned to telephone follow-ups. The findings suggest that a dynamic adaptive survey design can improve a data quality indicator (the R-indicator) without increasing cost or reducing the response rate. We also find that a dynamic adaptive survey design has the potential to shorten the data collection period, control cost, and increase the timeliness of data delivery, provided that sample representativeness is prioritized over maximizing the survey response rate.
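For reference, and not as a formula quoted from the article itself, the sample-based R-indicator commonly used in the survey methodology literature to monitor representativeness is

R(\hat{\rho}) = 1 - 2\,S(\hat{\rho}), \qquad S(\hat{\rho}) = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(\hat{\rho}_i - \bar{\rho}\right)^{2}},

where \hat{\rho}_i is the estimated response propensity of sample unit i, \bar{\rho} is the mean estimated propensity, and N is the sample size. Values near 1 indicate that propensities vary little across the sample, i.e., a more representative respondent set; the experiment described above may use a design-weighted variant of S.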
In the United States, refusal rates in government surveys have been rising at an alarming rate, despite traditional measures for mitigating nonresponse. Given this trend, now is a good time to revisit the work of Harris-Kojetin and Tucker (1999). In that study, the authors explored the relationship between economic and political conditions and Current Population Survey (CPS) refusal rates over the period 1960–1988. They found evidence that economic and political factors are associated with survey refusals and acknowledged the need to extend this work as more data became available. In this study, our aim was to continue their analysis. First, we replicated their findings. Next, we ran the assumed underlying model on an extended time period (1960–2015). Last, because we found that the model was not an ideal fit for the extended period, we revised it using available time series and incorporating information about the CPS sample design. In the extended, refined model, presidential approval, census year, number of jobs, and the not-in-labor-force rate were all significant predictors of survey refusal.
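As a purely illustrative sketch (not the authors' code or data), a regression of the kind described above can be fit in Python with statsmodels; all variable names and values below are simulated placeholders standing in for the annual CPS refusal-rate series and the economic and political predictors.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated annual series, 1960-2015, standing in for the real inputs.
rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
n = len(years)
df = pd.DataFrame({
    "year": years,
    "pres_approval": rng.uniform(30, 70, n),       # presidential approval (%)
    "census_year": (years % 10 == 0).astype(int),  # decennial census indicator
    "jobs": np.linspace(54, 140, n),               # employment level (millions)
    "nilf_rate": rng.uniform(30, 38, n),           # not-in-labor-force rate (%)
})
# Simulated outcome: refusal rate loosely tied to the predictors, plus noise.
df["refusal_rate"] = (
    2.0
    - 0.02 * df["pres_approval"]
    + 0.50 * df["census_year"]
    + 0.01 * df["jobs"]
    + 0.05 * df["nilf_rate"]
    + rng.normal(0, 0.2, n)
)

# Ordinary least squares with the four predictors named in the abstract.
model = smf.ols(
    "refusal_rate ~ pres_approval + census_year + jobs + nilf_rate", data=df
).fit()
print(model.summary())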
Summary
Adaptive designs involve preplanned rules for modifying an ongoing study based on accruing data. We compare the goals and methods of adaptation for trials and surveys, identify similarities and differences, and recommend which types of adaptive approaches from one domain have high potential to be useful in the other. For example, clinical trials could benefit from recently developed survey methods for monitoring which groups have low response rates and intervening to address this. Clinical trials may also benefit from more formal identification of the target population, and from using paradata (contextual information collected before or during the collection of actual outcomes) to predict participant compliance and retention and then to intervene to improve these. Surveys could benefit from stopping rules based on information monitoring, from applying techniques from sequential multiple-assignment randomized trial designs to improve response rates, and from prespecifying a formal adaptation protocol and including a data monitoring committee. We conclude with a discussion of the additional information, infrastructure, and statistical analysis methods that are needed when conducting adaptive designs, as well as the benefits and risks of adaptation.