Challenges in measuring early childhood development (ECD) at scale have been documented, yet little is known about the specific difficulties related to questionnaire design and question interpretation. The purpose of this paper is to discuss the challenges of measuring ECD at scale in the context of household surveys and to show how they can be overcome. The paper uses examples from the cognitive interviewing exercises conducted as part of the methodological work to develop a measure of ECD outcomes, the ECDI2030. It describes the methodological work carried out to inform the selection and improvement of question items and survey implementation tools as a fundamental step to reduce and mitigate systematic measurement error and improve data quality. The project consisted of five rounds of testing, comprising 191 one-on-one, in-depth cognitive interviews across six countries (Bulgaria, India, Jamaica, Mexico, Uganda, and the USA). Qualitative data analysis methods were used to identify matches and mismatches between the intended meaning of items and respondents' interpretations, and to detect false positive or false negative answers among subgroups of respondents. Key themes emerged that could potentially lead to systematic measurement error in population-based surveys on ECD: (1) willingness of the child to perform a task versus ability of the child to perform the task; (2) performing a task versus performing the task correctly; (3) identifying letters or numbers versus recognizing letters or numbers; (4) consistently performing a task versus correctly performing a task; (5) applicability of the skills being asked about versus observability of those skills; and (6) language production versus language comprehension. Through an iterative process of testing and subsequent revision, improvements were made to item wording, response options, and interviewer training instructions. Given the difficulties inherent in population-level data collection in the context of global monitoring, this study's findings confirm the importance of cognitive testing as a crucial step in careful, culturally relevant, and sensitive questionnaire design and as a means to reduce response bias in cross-cultural contexts.
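Because the analysis hinges on distinguishing matches from false positive and false negative answers across respondent subgroups, a small tabulation helps make those terms concrete. The sketch below is purely illustrative and is not the project's analysis pipeline; the items, subgroups, and records in it are hypothetical.

```python
# Illustrative sketch (not from the ECDI2030 project): one way to tabulate
# cognitive-interview findings as matches, false positives, and false negatives
# by respondent subgroup. Items, subgroups, and records are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class InterviewFinding:
    item: str          # survey item being probed
    subgroup: str      # respondent subgroup, e.g. country or caregiver education
    has_skill: bool    # whether probing showed the child actually has the skill
    reported: bool     # the caregiver's answer to the survey item

def classify(finding: InterviewFinding) -> str:
    """Match when the report agrees with the probe; otherwise a false positive or negative."""
    if finding.reported == finding.has_skill:
        return "match"
    return "false_positive" if finding.reported else "false_negative"

findings = [
    InterviewFinding("says 10 or more words", "Uganda", has_skill=True, reported=True),
    InterviewFinding("says 10 or more words", "Mexico", has_skill=False, reported=True),
    InterviewFinding("identifies letters", "India", has_skill=True, reported=False),
]

# Tally outcomes per subgroup to spot patterns suggesting systematic error.
tally = Counter((f.subgroup, classify(f)) for f in findings)
for (subgroup, outcome), n in sorted(tally.items()):
    print(f"{subgroup:8s} {outcome:15s} {n}")
```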
Background: Survey researchers use monetary incentives as a strategy to motivate physicians' survey participation. Experiments from general population surveys demonstrate that prepaid incentives increase response rates and lower survey administration costs relative to postpaid incentives. Experiments comparing these two incentive strategies have rarely been attempted with physician samples.
Methods: A nationally representative sample of oncologists was recruited to participate in the National Survey of Precision Medicine in Cancer Treatment. To determine the optimal strategy for survey incentives, sample members were randomly assigned to receive a $50 prepaid incentive check or a $50 promised (postpaid) incentive check. Outcome measures for this incentive experiment include cooperation rates, speed of response, check-cashing behavior, and a comparison of hypothetical costs for different incentive strategies.
Results: Cooperation rates were considerably higher for sample members in the prepaid condition (41%) than in the postpaid condition (29%). Similar differences in cooperation rates were seen when physicians were stratified by region, size of the physician's metropolitan statistical area, specialty, and gender by age. Survey responders in the prepaid condition responded earlier in the field period than those in the postpaid condition, thus requiring fewer contacts. In the prepaid group, 84% of sample members who responded with a completed survey cashed the incentive check, and only 6% of nonresponders cashed the check. In the postpaid condition, 72% of survey responders cashed the check; nonresponders were not given a check. The relatively higher cooperation rates and earlier response in the prepaid condition were associated with a 30% cost savings compared to the postpaid incentive condition.
Conclusions: The results of this study suggest that the rewards of offering physicians a prepaid incentive check outweigh the possible risks of nonresponders cashing the check. The relative cost benefit of this strategy is likely to vary depending on the amount of the incentive relative to the costs of additional contact attempts to nonresponders.
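The reported cost savings comes down to simple bookkeeping: under a prepaid design every sample member receives a check that may or may not be cashed, while under a postpaid design only completers are paid, but later response means more contact attempts. The sketch below illustrates that arithmetic. The cooperation and check-cashing rates are the ones reported above, whereas the sample size, per-contact cost, and contact-attempt counts are invented for illustration and are not from the study.

```python
# Illustrative cost comparison of prepaid vs. postpaid $50 incentives.
# Rates come from the abstract; SAMPLE_SIZE, COST_PER_CONTACT, and the
# contact-attempt counts are assumptions made only for this example.

INCENTIVE = 50.00        # value of the incentive check
SAMPLE_SIZE = 1_000      # assumed sampled physicians per condition
COST_PER_CONTACT = 8.00  # assumed cost of one contact attempt

def condition_cost(coop_rate, responder_cash_rate, nonresponder_cash_rate,
                   contacts_per_responder, contacts_per_nonresponder):
    """Return (total cost, cost per completed survey): cashed checks plus contact attempts."""
    responders = SAMPLE_SIZE * coop_rate
    nonresponders = SAMPLE_SIZE - responders
    incentives = INCENTIVE * (responders * responder_cash_rate
                              + nonresponders * nonresponder_cash_rate)
    contacts = COST_PER_CONTACT * (responders * contacts_per_responder
                                   + nonresponders * contacts_per_nonresponder)
    total = incentives + contacts
    return total, total / responders

# Prepaid: 41% cooperation; 84% of responders and 6% of nonresponders cash the check.
# Earlier response, so fewer contact attempts are assumed (2 vs. 4).
prepaid_total, prepaid_per_complete = condition_cost(
    0.41, 0.84, 0.06, contacts_per_responder=2, contacts_per_nonresponder=4)

# Postpaid: 29% cooperation; 72% of responders cash the check, nonresponders get none.
# Later response, so more contact attempts are assumed (3 vs. 5).
postpaid_total, postpaid_per_complete = condition_cost(
    0.29, 0.72, 0.00, contacts_per_responder=3, contacts_per_nonresponder=5)

print(f"Prepaid:  total ${prepaid_total:,.0f}, per complete ${prepaid_per_complete:,.2f}")
print(f"Postpaid: total ${postpaid_total:,.0f}, per complete ${postpaid_per_complete:,.2f}")
savings = 1 - prepaid_per_complete / postpaid_per_complete
print(f"Per-complete savings from prepaid under these assumptions: {savings:.0%}")
```

Under these made-up inputs the prepaid condition costs more in total but roughly 30% less per completed survey; as the Conclusions note, the actual balance depends on the incentive amount relative to the cost of follow-up contacts.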