Multilevel data are a reality for many disciplines. Although multiple options exist for the treatment of multilevel data, most disciplines strictly adhere to a single method regardless of the specific research design circumstances. The purpose of this Monte Carlo simulation study is to compare several methods for the treatment of multilevel data, specifically when there is random coefficient variation in small samples. The methods compared are fixed effects modeling (the industry standard in business and managerial sciences), multilevel modeling using restricted maximum likelihood (REML) estimation (the industry standard in the social and behavioral sciences), multilevel modeling using the Kenward-Roger correction, and Bayesian estimation using Markov chain Monte Carlo. Results indicate that multilevel modeling does have an advantage over fixed effects modeling when Level 2 slope parameter variance exists. Bayesian estimation of multilevel effects can be advantageous over traditional multilevel modeling using REML, but only when prior probabilities are correctly specified. Results are presented in terms of Type I error, power, parameter estimation bias, empirical parameter estimate standard error, and 95% coverage rates for parameters, and recommendations are provided.
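To make the model contrast above concrete, the sketch below simulates a small two-level data set with Level 2 slope variance and fits both a fixed effects model and a random-slope multilevel model estimated with REML, using Python and statsmodels. The simulation settings and variable names are illustrative assumptions rather than the study's actual design, and the Kenward-Roger and Bayesian MCMC approaches named in the abstract are not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a small two-level data set with random intercepts and random slopes (assumed values)
rng = np.random.default_rng(1)
n_clusters, n_per = 20, 10
cluster = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(size=n_clusters * n_per)
u0 = rng.normal(scale=0.5, size=n_clusters)   # Level 2 intercept deviations
u1 = rng.normal(scale=0.3, size=n_clusters)   # Level 2 slope deviations (random coefficient variation)
y = (1.0 + u0[cluster]) + (0.5 + u1[cluster]) * x + rng.normal(size=n_clusters * n_per)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

# Fixed effects model: cluster dummies absorb Level 2 intercept differences,
# but the slope of x is constrained to be identical across clusters
fe_fit = smf.ols("y ~ x + C(cluster)", data=df).fit()

# Multilevel model with a random intercept and a random slope for x, estimated by REML
ml_fit = smf.mixedlm("y ~ x", data=df, groups=df["cluster"], re_formula="~x").fit(reml=True)

print("fixed effects slope:", fe_fit.params["x"])
print("multilevel (REML) slope:", ml_fit.params["x"])
```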
In interviewer-administered omnibus surveys, burdensome questions asked early in a survey may result in lower-quality responses to questions asked later in the survey. Two examples of such burdensome questions are social network questions, in which respondents are asked about members of their personal network, and knowledge questions, in which respondents are asked to provide a factually correct response. In this study, we explore how the presence of potentially burdensome questions is associated with item nonresponse and acquiescence rates on subsequent survey questions, and whether this effect differs by respondent age and education. We use data from the 2010 General Social Survey (AAPOR RR5 = 70.3%; AAPOR, 2016), which experimentally varied the location of a social network module and the presence of a knowledge question module. Respondents who received knowledge questions had higher item nonresponse rates on subsequent questions than those who did not, but the quality of responses did not differ by the presence of social network questions. Further, respondents with different characteristics were not differentially burdened by the knowledge questions or the social network questions. We conclude that knowledge questions may be better asked near the end of omnibus surveys to preserve response quality for subsequent questions.
Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade Level, but other formulas exist. This article compares six different readability measures across 150 questions in a self-administered questionnaire, finding notable variation in calculated readability across measures. Some question formats, including those that are part of a battery, require important decisions that have large effects on the estimated readability of survey items. Other question evaluation tools, such as the Question Understanding Aid (QUAID) and the Survey Quality Predictor (SQP), may identify similar problems in questions, making readability measures less useful. We find little overlap between QUAID, SQP, and the readability measures, and little differentiation in the tools' prediction of item nonresponse rates. Questionnaire designers are encouraged to use multiple question evaluation tools and to develop readability measures specifically for survey questions.
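For reference, the Flesch-Kincaid Grade Level named above is a simple formula over word, sentence, and syllable counts. The sketch below is a minimal Python illustration; the vowel-group syllable counter is a rough assumption and will not exactly match the counts produced by dedicated readability tools.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    # Rough syllable estimate: count runs of consecutive vowels in each word
    n_syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59

print(flesch_kincaid_grade("How satisfied are you with the services you received last year?"))
```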