As survey methods evolve, researchers require a comprehensive understanding of the error sources in their data. Comparative studies, which assess differences between the estimates from emerging survey methods and those from traditional surveys, are a popular tool for evaluating total error; however, they do not provide insight into the contributing error sources themselves. The Total Survey Error (TSE) framework is a natural fit for evaluations that examine survey error components across multiple data sources. In this article, we present a case study that demonstrates how the TSE framework can support both qualitative and quantitative evaluations comparing probability and nonprobability surveys. Our case study focuses on five internet panels that are intended to represent the US population and are used to measure health statistics. For these panels, we analyze total survey error in two ways: (1) a qualitative assessment that describes how panel construction and management methods may introduce error, and (2) a quantitative assessment that estimates and partitions the total error for two probability-based panels into coverage error and nonresponse error. This work can serve as a “proof of concept” for how the TSE framework may be applied to understand and compare the error structure of probability and nonprobability surveys. For those working specifically with internet panels, our findings further illustrate how researchers may choose the panel best suited to their study aims and help vendors prioritize areas for improvement.
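As a rough sketch of one common way such a partition can be expressed (not necessarily the authors' exact estimator), let $\bar{Y}$ denote the target-population mean, $\bar{Y}_F$ the mean over the part of the population covered by the panel's frame, and $\bar{y}_r$ the unadjusted respondent mean; then
$$
\bar{y}_r - \bar{Y} \;=\; \underbrace{\left(\bar{Y}_F - \bar{Y}\right)}_{\text{coverage error}} \;+\; \underbrace{\left(\bar{y}_r - \bar{Y}_F\right)}_{\text{nonresponse error}},
$$
an identity obtained by adding and subtracting $\bar{Y}_F$, with each component then estimated against an external benchmark.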
Short message service (SMS, or text messaging) surveys can collect data quickly and inexpensively. However, SMS surveys have space constraints that pose difficulties for questionnaire designers: questions can contain no more than 160 characters, surveys must be short, and many respondents use phones with small screens. In this article, we compare SMS and benchmark face-to-face surveys across four African countries to address three common questions in SMS questionnaire design. First, are multiple-response questions feasible? Second, is it necessary to randomize response option order? Third, should “don’t know” response options be included? Our results provide practical guidance on how to improve data quality with this emerging mode of data collection.
Nonresponse rates have been growing over time, leading to concerns about survey data quality. Adaptive designs seek to allocate scarce resources by targeting specific subsets of sampled units for additional effort or a different recruitment protocol. To be effective in reducing nonresponse, the identified subsets of the sample need two key features: 1) their probabilities of response can be changed by altering design features, and 2) once they have responded, their inclusion can affect estimates after adjustment. The National Agricultural Statistics Service (NASS) is investigating the use of adaptive design techniques in the Crops Acreage, Production, and Stocks Survey (Crops APS). The Crops APS is a survey of establishments that vary in size and, hence, in their potential impact on estimates. To identify subgroups for targeted designs, we conducted a simulation study that used Census of Agriculture (COA) data as proxies for similar survey items. Different patterns of nonresponse were simulated to identify subgroups whose changed response propensities may reduce estimated nonresponse bias.
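A minimal sketch of the kind of simulation this abstract describes is below; the size classes, acreage distributions, and response propensities are hypothetical placeholders rather than NASS or COA figures, and the "truth" here is simply the full proxy-sample mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical proxy data standing in for Census of Agriculture items:
# establishment acreage by size class (illustrative values only).
n = 10_000
size_class = rng.choice(["small", "medium", "large"], size=n, p=[0.6, 0.3, 0.1])
acreage = np.where(
    size_class == "large", rng.gamma(8.0, 400.0, n),
    np.where(size_class == "medium", rng.gamma(5.0, 120.0, n), rng.gamma(3.0, 40.0, n)),
)
true_mean = acreage.mean()  # full proxy-sample mean used as the benchmark


def simulated_bias(propensity_by_class, n_reps=500):
    """Average bias of the unadjusted respondent mean under a nonresponse pattern."""
    p = np.array([propensity_by_class[c] for c in size_class])
    biases = []
    for _ in range(n_reps):
        responded = rng.random(n) < p
        biases.append(acreage[responded].mean() - true_mean)
    return float(np.mean(biases))


# Baseline pattern: larger establishments respond at lower rates.
baseline = {"small": 0.75, "medium": 0.60, "large": 0.40}
# Targeted design: extra recruitment effort raises the large-establishment propensity.
targeted = {"small": 0.75, "medium": 0.60, "large": 0.65}

print(f"Estimated nonresponse bias, baseline pattern: {simulated_bias(baseline):.1f}")
print(f"Estimated nonresponse bias, targeted pattern: {simulated_bias(targeted):.1f}")
```

A fuller study would apply the survey's weighting adjustments before comparing biases and would repeat the exercise across candidate subgroup definitions and nonresponse patterns.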