With the development of online data collection and platforms such as Amazon's Mechanical Turk (MTurk), malicious software that completes surveys in order to earn money has become a major issue, for both economic and scientific reasons. Even if paying one respondent to complete one questionnaire represents a very small cost, the multiplication of botnets providing invalid response sets may ultimately reduce study validity while increasing research costs. Several techniques have been proposed to detect problematic human response sets, but little research has tested the extent to which they actually detect nonhuman response sets. We therefore conducted an empirical comparison of these indices. Assuming that most botnet programs draw responses from uniform random distributions, we present and compare seven indices for detecting nonhuman response sets. A sample of 1,967 human respondents was mixed with different percentages (from 5% to 50%) of simulated random response sets. Three of the seven indices (response coherence, Mahalanobis distance, and person-total correlation) proved to be the best estimators for detecting nonhuman response sets. Because two of these indices, the Mahalanobis distance and the person-total correlation, are easy to calculate, any researcher working with online questionnaires can use them to screen for such invalid data.
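As an illustration of the two easily computed indices mentioned above, the following is a minimal sketch (not the authors' code), assuming Likert-type items on a 1-5 scale, simulated uniform-random "bot" responses, and illustrative cutoff values that are not taken from the study.

```python
# Minimal sketch of two screening indices for random response sets:
# Mahalanobis distance and person-total correlation.
# The simulated data and the cutoff values are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Hypothetical Likert data (1-5): humans roughly track item-level norms,
# while "bots" answer from a uniform random distribution.
item_level = rng.uniform(2.0, 5.0, size=(1, 20))
human = np.clip(np.rint(item_level + rng.normal(0, 0.7, size=(950, 20))), 1, 5)
bots = rng.integers(1, 6, size=(50, 20))
X = np.vstack([human, bots]).astype(float)

# Mahalanobis distance of each response vector from the sample centroid.
mu = X.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
diff = X - mu
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared distances

# Person-total correlation: each person's responses correlated with the
# sample's item means (a simple, non-leave-one-out variant).
item_means = X.mean(axis=0)
ptc = np.array([np.corrcoef(row, item_means)[0, 1] for row in X])

# Flag suspect respondents: extreme Mahalanobis distance (chi-square cutoff)
# or low person-total correlation; both cutoffs are assumptions.
flagged = (d2 > chi2.ppf(0.999, df=X.shape[1])) | (ptc < 0.1)
print(f"flagged {flagged.sum()} of {len(X)} response sets")
```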
Recent research has shown that, in a university context, mastery goals are highly valued and that students may endorse them either because they believe in their utility (i.e., social utility), in which case mastery goals positively predict achievement, or to create a positive image of themselves (i.e., social desirability), in which case mastery goals do not predict academic achievement. In two experiments, we induced high versus neutral levels of the social utility and the social desirability of mastery goals. Results confirmed that mastery goals predicted performance only when presented as socially useful and not as socially desirable, especially among low achievers, the students who need mastery goals the most to succeed.
Although it has been assumed that the motivation to learn, or mastery goal endorsement, positively predicts learning achievement, most empirical findings fail to demonstrate this relationship. In the present research, conducted in a Swiss high school, we adopted a social value approach to test the hypothesis that adolescent students' mastery goals do in fact predict learning, but only if these goals are perceived as highly useful for scholarly success (high social utility) and are not endorsed as a means to be appreciated by the teachers (low social desirability), a finding that has previously been observed among college students and on teacher-graded achievement measures only. Results demonstrate that in spite of potential peculiarities of an adolescent population, individual differences in mastery goals' perceived social utility and desirability moderate the mastery goal endorsement-learning achievement relation. Findings are discussed with regard to both theory development and educational practice.
The functional method is a new test theory using a new scoring method that assumes complexity in test structure and thus takes into account every correlation between factors and items. The main specificity of the functional method is to model test scores by multiple regression instead of estimating them with simplistic sums of points. To do so, the functional method requires the creation of a hyperspherical measurement space, in which item responses are expressed by their correlations with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales, and persons are expressed in the same measurement space using the same single metric. Second, the factors are systematically orthogonal and error-free, which is optimal for predicting other outcomes. Such predictions can be used to estimate how one would answer other tests, or even to model what one's response strategy would be if it were perfectly coherent. Third, the functional method provides measures of individuals' response validity (i.e., control indices). Herein, we propose a standard procedure, based on these control indices, to identify whether test results are interpretable and to exclude invalid results caused by various response biases.

Keywords: functional method, exploratory factor analysis, psychometrics, response reliability, response validity, self-rated questionnaires

Introduction

For about a century, psychological testing has been an increasingly important part of psychologists' activity. Nonetheless, despite formidable developments in psychometrics over the last several decades, most clinical and scientific practice in psychological assessment still relies on the nearly unchanged method inherited from classical test theory (CTT). Many criticisms have been raised against the classic method: in particular, it too often assumes that items have no secondary loadings and that each item carries the same weight, which is convenient for confirmatory factor analysis but leads to a loss of reliability. Furthermore, although CTT imposes requirements on test validity (e.g., satisfactory reliability, concurrent validity), few requirements have been formulated concerning how individuals respond to self-administered questionnaires.

The functional method is a new method that improves test reliability and provides indices of the validity of one's responses to self-administered tests. First, three major issues of classic testing are examined; then, item response theory (IRT) is presented as the main alternative to CTT. Last, the functional method is presented, with a focus on how it deals with these problems, and is compared with CTT and IRT.

The Problem of Response Intrinsic Quality

It is well known that psychological tests can be biased both intentionally and unintentionally; this can invalidate one's test results and can even invalidate an entire test validation (Caldwell-Andrews et...
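To make the contrast with classic sum scoring concrete, here is a minimal sketch using a standard exploratory factor analysis with regression-style factor scores. This is not the functional method itself; the simulated data, the two-factor structure, and all parameter choices are assumptions made for illustration.

```python
# Contrast between unit-weighted sum scores (classic scoring) and
# regression-style scores on orthogonal factors, in which every item
# contributes to every factor score. Illustrative data and settings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Hypothetical responses: 500 persons x 12 items driven by 2 latent
# factors, with cross-loadings (items relate to more than one factor).
F = rng.normal(size=(500, 2))
loadings = rng.uniform(0.2, 0.8, size=(2, 12))
X = F @ loadings + rng.normal(scale=0.5, size=(500, 12))

# Classic-test-theory style: unit-weighted sum over an item subset,
# ignoring secondary loadings and weighting every item equally.
sum_scores = X[:, :6].sum(axis=1)

# Factor-analytic scoring on standardized items: orthogonal (varimax)
# factors, with each person scored by regression rather than summation.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
fa = FactorAnalysis(n_components=2, rotation='varimax')
factor_scores = fa.fit_transform(Z)

print(sum_scores[:3])
print(factor_scores[:3])
```

In the functional method as described above, the regression is built in a correlation-metric ("hyperspherical") space; the sketch conveys only the general idea that weighted, factor-based scores replace unit-weighted sums.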