Unit nonresponse or attrition in panel data sets is often a source of nonrandom measurement error. Researchers have examined why certain individuals attrite from longitudinal studies and how to minimize this phenomenon. However, this research has typically focused on data sets collected via telephone, postal mail, or face-to-face interviews, and it usually relies on demographic characteristics such as educational attainment or income to explain variation in the incidence of unit nonresponse. We make two contributions to the existing literature. First, we examine the incidence of unit nonresponse in an internet panel, a relatively new, and hence understudied, approach to gathering longitudinal data. Second, we hypothesize that personality traits, which typically remain unobserved and unmeasured in many data sets, affect the likelihood of unit nonresponse. Using data from an internet panel that includes self-reported measures of personality in its baseline survey, we find that conscientiousness and openness to experience predict the incidence of unit nonresponse in subsequent survey waves, even after controlling for cognitive ability and the demographic characteristics that researchers usually have available and use to correct for panel attrition. We also test the potential of paradata as proxies for personality traits. Although these proxies predict panel attrition in the same way as self-reported measures of personality traits, it is unclear to what extent they capture particular personality traits rather than other individual circumstances related to future survey completion. Our results suggest that obtaining explicit measures of personality traits, or finding better proxies for them, is crucial for more fully addressing the potential bias that may arise from panel attrition.
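The abstract does not spell out the estimation strategy. A minimal sketch of the kind of model it implies, a logistic regression of later-wave nonresponse on baseline Big Five scores, cognitive ability, and demographics, might look like the following (all file and column names are hypothetical, not the authors'):

```python
# Illustrative sketch only: a logistic regression of subsequent-wave unit
# nonresponse on baseline Big Five scores, cognitive ability, and demographics.
# The abstract does not state the exact specification; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per baseline respondent, with an indicator for whether the person
# failed to complete a later survey wave (hypothetical data layout).
df = pd.read_csv("panel_baseline.csv")

model = smf.logit(
    "nonresponse ~ conscientiousness + openness + extraversion"
    " + agreeableness + neuroticism"
    " + cognitive_score + age + C(education) + income",
    data=df,
).fit()

# Trait coefficients net of the demographic controls researchers usually have.
print(model.summary())
```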
Monitoring of cognitive abilities in large-scale survey research is receiving increasing attention. Conventional cognitive testing, however, is often impractical at the population level, highlighting the need for alternative means of cognitive assessment. We evaluated whether response times (RTs) to online survey items could be useful for inferring cognitive abilities. We analyzed more than 5 million survey item RTs from more than 6000 individuals, administered over 6.5 years in an internet panel, together with cognitive tests (numerical reasoning, verbal reasoning, task switching/inhibitory control). We derived measures of mean RT and intraindividual RT variability from a multilevel location-scale model, as well as from an expanded version that separated intraindividual RT variability into systematic RT adjustments (variation of RTs with item time intensities) and residual intraindividual RT variability (residual error in RTs). RT measures from the location-scale model showed weak associations with cognitive test scores. However, RT measures from the expanded model explained 22–26% of the variance in cognitive scores and had prospective associations with cognitive assessments over lag periods of at least 6.5 years (mean RTs), 4.5 years (systematic RT adjustments), and 1 year (residual RT variability). Our findings suggest that RTs in online surveys may be useful for gaining information about cognitive abilities in large-scale survey research.
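As a rough sketch of the model family described above (illustrative notation, not necessarily the authors' exact specification), a mixed-effects location-scale model for log RTs and its expanded version can be written as:

```latex
% Location-scale model: b_{0i} is person i's mean (location) shift in log RT,
% a_{0i} is person i's intraindividual RT variability (scale). Notation is ours.
\begin{align}
\log T_{ij} &= \beta_0 + b_{0i} + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N\!\left(0,\ \sigma^2_{ij}\right),\\
\log \sigma^2_{ij} &= \alpha_0 + a_{0i}.
\end{align}

% Expanded model: RTs additionally load on item time intensities \lambda_j,
% so b_{1i} captures person i's systematic RT adjustments and a_{0i} now
% captures residual intraindividual RT variability.
\begin{align}
\log T_{ij} &= \beta_0 + b_{0i} + (1 + b_{1i})\,\lambda_j + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N\!\left(0,\ \sigma^2_{ij}\right).
\end{align}
```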
Background: Cognitive testing in large population surveys is frequently used to describe cognitive aging and determine the incidence rates, risk factors, and long-term trajectories of the development of cognitive impairment. As these surveys are increasingly administered on internet-based platforms, web-based and self-administered cognitive testing calls for close investigation.

Objective: Web-based, self-administered versions of 2 age-sensitive cognitive tests, the Stop and Go Switching Task for executive functioning and the Figure Identification test for perceptual speed, were developed and administered to adult participants in the Understanding America Study. We examined differences in cognitive test scores across internet device types and the extent to which the scores were associated with self-reported distractions in the everyday environments in which the participants took the tests. In addition, national norms were provided for the US population.

Methods: Data were collected from a probability-based internet panel representative of the US adult population, the Understanding America Study. Participants with access to both a keyboard- and mouse-based device and a touch screen–based device were asked to complete the cognitive tests twice in a randomized order across device types, whereas participants with access to only 1 type of device were asked to complete the tests twice on the same device. At the end of each test, the participants answered questions about interruptions and potential distractions that occurred during the test.

Results: Of the 7410 (Stop and Go) and 7216 (Figure Identification) participants who completed the device ownership survey, 6129 (82.71% for Stop and Go) and 6717 (93.08% for Figure Identification) participants completed the first session and correctly responded to at least 70% of the trials. On average, the standardized differences across device types were small, with the absolute value of Cohen d ranging from 0.05 (for the switch score in Stop and Go and the Figure Identification score) to 0.13 (for the nonswitch score in Stop and Go). Poorer cognitive performance was moderately associated with older age (the absolute value of r ranged from 0.32 to 0.61), and this relationship was comparable across device types (the absolute value of Cohen q ranged from 0.01 to 0.17). Approximately 12.72% (779/6123 for Stop and Go) and 12.32% (828/6721 for Figure Identification) of participants were interrupted during the test. Interruptions predicted poorer cognitive performance (P<.01 for all scores). Specific distractions (eg, watching television and listening to music) were inconsistently related to cognitive performance. National norms, calculated as weighted average scores using sampling weights, suggested poorer cognitive performance as age increased.

Conclusions: Cognitive scores assessed by self-administered web-based tests were sensitive to age differences in cognitive performance and were comparable across the keyboard- and touch screen–based internet devices. Distraction in everyday environments, especially when interrupted during the test, may result in a nontrivial bias in cognitive testing.
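For readers less familiar with the effect sizes reported above, the following illustrative computation (with hypothetical stand-in data) shows how Cohen d standardizes a mean score difference between device types, how Cohen q compares correlations across devices on the Fisher z scale, and how national norms can be computed as weighted averages using sampling weights:

```python
# Illustrative computations of the effect sizes and weighted norms described
# above; all arrays are hypothetical stand-ins, not the study data.
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def cohens_q(r1, r2):
    """Difference between two correlations on the Fisher z (arctanh) scale."""
    return np.arctanh(r1) - np.arctanh(r2)

rng = np.random.default_rng(0)
keyboard_scores = rng.normal(50, 10, 1000)      # hypothetical test scores
touchscreen_scores = rng.normal(49.5, 10, 1000)

print(cohens_d(keyboard_scores, touchscreen_scores))
print(cohens_q(-0.45, -0.40))  # e.g., age-performance correlations on two device types

# National norms as weighted averages using survey sampling weights.
weights = rng.uniform(0.5, 2.0, 1000)           # hypothetical sampling weights
print(np.average(keyboard_scores, weights=weights))
```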
Researchers have become increasingly interested in response times to survey items as a measure of cognitive effort. We used machine learning to develop a prediction model of response times based on 41 attributes of survey items (e.g., question length, response format, linguistic features) collected in a large, general population sample. The developed algorithm can be used to derive reference values for expected response times for most commonly used survey items.
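The abstract does not name the learner or the exact feature set. One plausible setup, sketched below with a gradient-boosting regressor over hypothetical numeric item-attribute columns, illustrates the general approach of predicting expected item response times from item attributes:

```python
# Illustrative sketch: predicting an item's typical response time from item
# attributes. The choice of learner, file name, and columns are assumptions,
# not the authors'; categorical attributes would need encoding first.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

items = pd.read_csv("item_attributes.csv")      # one row per survey item (hypothetical file)
X = items.drop(columns=["median_rt_seconds"])   # the 41 item attributes
y = items["median_rt_seconds"]                  # observed typical RT per item

model = GradientBoostingRegressor(random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2"))  # out-of-sample fit

# Reference values: expected RTs for items given their attributes.
model.fit(X, y)
items["expected_rt_seconds"] = model.predict(X)
```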