With the development of online data collection and platforms such as Amazon's Mechanical Turk (MTurk), malicious software that generates survey responses in order to earn money has become a major issue, for both economic and scientific reasons. Although paying one respondent to complete one questionnaire costs very little, the multiplication of botnets providing invalid response sets may ultimately reduce study validity while increasing research costs. Several techniques have been proposed to detect problematic human response sets, but little research has tested the extent to which they actually detect nonhuman response sets. We therefore conducted an empirical comparison of these indices. Assuming that most botnet programs draw responses from a uniform random distribution, we present and compare seven indices for detecting nonhuman response sets. A sample of 1,967 human respondents was mixed with varying percentages (from 5% to 50%) of simulated random response sets. Three of the seven indices (response coherence, Mahalanobis distance, and person-total correlation) proved to be the best estimators for detecting nonhuman response sets. Because two of these indices, Mahalanobis distance and person-total correlation, are easy to calculate, any researcher working with online questionnaires can use them to screen for the presence of such invalid data.
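As an illustration, here is a minimal Python sketch (not the authors' code) of the two easily computed indices highlighted above, Mahalanobis distance and person-total correlation, applied to a simulated mixture of human-like and uniform-random response sets. The data-generating parameters and flagging cutoffs are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' code) of two easy-to-compute screening
# indices: Mahalanobis distance and person-total correlation. The simulated
# data and flagging cutoffs below are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

def mahalanobis_distances(X):
    """Squared Mahalanobis distance of each response set from the centroid."""
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse for stability
    return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

def person_total_correlations(X):
    """Correlation of each respondent's item vector with the mean item profile."""
    item_means = X.mean(axis=0)
    return np.array([np.corrcoef(row, item_means)[0, 1] for row in X])

# Mimic the study's design: human-like 1-5 Likert responses mixed with
# uniform-random "bot" response sets (here 10% of the sample).
rng = np.random.default_rng(0)
item_locs = rng.uniform(2.0, 4.5, size=20)  # items differ in typical endorsement
humans = np.clip(np.round(rng.normal(item_locs, 0.8, size=(1000, 20))), 1, 5)
bots = rng.integers(1, 6, size=(100, 20)).astype(float)
X = np.vstack([humans, bots])

d2 = mahalanobis_distances(X)
r = person_total_correlations(X)

# Flag multivariate outliers (chi-square cutoff) or response sets nearly
# uncorrelated with the typical item profile; thresholds are illustrative.
flagged = (d2 > chi2.ppf(0.999, df=X.shape[1])) | (r < 0.1)
print(f"flagged {flagged.sum()} of {len(X)} response sets")
```

Bot response sets are caught by both signals: drawing uniformly across all response options inflates their distance from the sample centroid and leaves their item profile uncorrelated with the typical respondent's.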
Background: In the era of evidence-based medicine, decision-making about the treatment of individual patients involves the conscientious, explicit, and judicious use of the current best evidence. Diagnostic tests usually obey the well-established quality standards of reproducibility and validity. By contrast, assessing the validation studies of tests used to diagnose mental and behavioral disorders can be tedious. This work aims to establish a methodological reference framework for the validation of diagnostic tools for mental disorders. We implemented this framework as part of the protocol for a systematic review of self-reported burnout measures. The objectives of this systematic review are (a) to assess the validation process used for each of the selected burnout measures, and (b) to grade the evidence of the validity and psychometric quality of each burnout measure. The ultimate goal is to select the most valid measure(s) for use in medical practice and epidemiological research. Methods: The review will consist of systematic searches of the MEDLINE, PsycINFO, and EMBASE databases. Two independent authors will screen the references in two phases: the first phase will be title and abstract screening, and the second phase full-text reading. Studies will have to meet four inclusion criteria: (a) address the psychometric properties of at least one of the eight validated burnout measures (b) in its original language (c) with sample(s) of working adults (18 to 65 years old) (d) of more than 100 participants. We will assess the risk of bias of each study using the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN) checklist. The outcomes of interest will be face validity, response validity, internal structure validity, convergent validity, discriminant validity, predictive validity, internal consistency, test-retest reliability, and alternate-form reliability, enabling assessment of the psychometric properties used to validate the eight burnout measures. We will examine the outcomes using the reference framework for validating measures of mental disorders. Results will be synthesized descriptively and, if the data are sufficiently homogeneous, using a meta-analysis.
Cognitive style is thought to be a stable marker of one’s way of approaching mental operations. While of wide interest over the last decades, its operationalization remains a challenge. The literature indicates that cognitive styles assessed via i) questionnaires are predicted by personality and ii) performance tests (e.g., the Group Embedded Figures Test; GEFT) are related to general intelligence. In the first study, we tested the psychometric relationship between the Cognitive Style Index questionnaire (CSI) and personality inventories (NEO Five-Factor Inventory, NEO-FFI; HEXACO Personality Inventory Revised, HEXACO-PI-R). In the second study, we administered the CSI, NEO-FFI, GEFT, and a general intelligence test (Raven’s Standard Progressive Matrices Test; RSMT). We found that CSI scores were largely predicted by personality and that the CSI was uncorrelated with GEFT performance; instead, better performance on the GEFT was associated with better performance on the RSMT. We conclude that i) cognitive style questionnaires overlap with personality inventories, ii) cognitive style performance tests do not measure cognitive styles and should not be used as such, and iii) the cognitive style concept needs to be assessed with alternative measurement types. We discuss possible future directions.
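For concreteness, below is a hedged sketch (not the authors' analysis scripts) of the two analyses the abstract reports: regressing CSI scores on personality traits, and correlating GEFT performance with RSMT and CSI scores. The simulated data merely reproduce the qualitative pattern described; all variable names are hypothetical.

```python
# Hedged sketch of the reported analyses: (1) regress CSI on Big Five
# traits, (2) correlate GEFT with RSMT and with CSI. Simulated scores
# reproduce only the qualitative pattern from the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 200
traits = rng.normal(size=(n, 5))
g = rng.normal(size=n)  # latent general ability underlying GEFT and RSMT

df = pd.DataFrame(traits, columns=["neuroticism", "extraversion", "openness",
                                   "agreeableness", "conscientiousness"])
df["CSI"] = traits @ np.array([0.4, -0.3, 0.3, 0.1, 0.2]) + rng.normal(0, 1, n)
df["GEFT"] = g + rng.normal(0, 0.5, n)  # tracks ability, not cognitive style
df["RSMT"] = g + rng.normal(0, 0.5, n)

# Study 1 logic: how much CSI variance do the personality traits explain?
model = smf.ols("CSI ~ neuroticism + extraversion + openness"
                " + agreeableness + conscientiousness", data=df).fit()
print(f"R-squared, personality -> CSI: {model.rsquared:.2f}")

# Study 2 logic: GEFT should correlate with RSMT but not with the CSI.
print("GEFT-RSMT r:", round(pearsonr(df["GEFT"], df["RSMT"])[0], 2))
print("GEFT-CSI  r:", round(pearsonr(df["GEFT"], df["CSI"])[0], 2))
```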