Integrity tests have become a prominent predictor within the selection literature over the past few decades. However, some researchers have expressed concerns about the criterion-related validity evidence for such tests because of a perceived lack of methodological rigor within this literature, as well as a heavy reliance on unpublished data from test publishers. In response to these concerns, we meta-analyzed 104 studies (representing 134 independent samples), which were authored by a similar proportion of test publishers and non-publishers, whose conduct was consistent with professional standards for test validation, and whose results were relevant to the validity of integrity-specific scales for predicting individual work behavior. Overall mean observed validity estimates and validity estimates corrected for unreliability in the criterion (respectively) were .12 and .15 for job performance, .13 and .16 for training performance, .26 and .32 for counterproductive work behavior, and .07 and .09 for turnover. Although data on restriction of range were sparse, illustrative corrections for indirect range restriction did increase validities slightly (e.g., from .15 to .18 for job performance). Several variables appeared to moderate relations between integrity tests and the criteria. For example, corrected validities for job performance criteria were larger when based on studies authored by integrity test publishers (.27) than when based on studies from non-publishers (.12). In addition, corrected validities for counterproductive work behavior criteria were larger when based on self-reports (.42) than when based on other-reports (.11) or employee records (.15).
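The "corrected for unreliability in the criterion" values above come from the standard correction for attenuation, r_c = r_xy / sqrt(r_yy). A minimal sketch of that calculation; the criterion reliability of 0.64 used here is purely illustrative (it happens to reproduce the .12 → .15 step for job performance) and is not a value reported in the study:

```python
import math

def correct_for_criterion_unreliability(r_xy, r_yy):
    """Disattenuate an observed validity for measurement error in the criterion.

    r_xy: observed predictor-criterion correlation
    r_yy: reliability of the criterion measure
    """
    return r_xy / math.sqrt(r_yy)

# Illustrative reliability, chosen only to show the arithmetic:
corrected = correct_for_criterion_unreliability(0.12, 0.64)
print(round(corrected, 2))  # prints 0.15
```

Corrections for range restriction (direct or indirect) use different formulas and require restriction-ratio information, which, as the abstract notes, was sparse in this literature.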
A common belief among researchers is that vocational interests have limited value for personnel selection. However, no comprehensive quantitative summaries of research on the validity of interests have been conducted to substantiate claims for or against their use. To help address this gap, we conducted a meta-analysis of relations between interests and employee performance and turnover using data from 74 studies and 141 independent samples. Overall validity estimates (corrected for measurement error in the criterion but not for range restriction) for single interest scales were .14 for job performance, .26 for training performance, -.19 for turnover intentions, and -.15 for actual turnover. Several factors appeared to moderate interest-criterion relations. For example, validity estimates were larger when interests were theoretically relevant to the work performed in the target job. The type of interest scale also moderated validity, such that corrected validities were larger for scales designed to assess interests relevant to a particular job or vocation (e.g., .23 for job performance) than for scales designed to assess a single, job-relevant realistic, investigative, artistic, social, enterprising, or conventional (i.e., RIASEC) interest (.10) or a basic interest (.11). Finally, validity estimates were largest when studies used multiple interests for prediction, either by using a single job- or vocation-focused scale (which tends to tap multiple interests) or by using a regression-weighted composite of several RIASEC or basic interest scales. Overall, the results suggest that vocational interests may hold more promise for predicting employee performance and turnover than researchers have thought.
Recent reports suggest that an increasing number of organizations are using information from social media platforms such as Facebook.com to screen job applicants. Unfortunately, empirical research concerning the potential implications of this practice is extremely limited. We address the use of social media for selection by examining how recruiter ratings of Facebook profiles fare with respect to two important criteria on which selection procedures are evaluated: criterion-related validity and subgroup differences (which can lead to adverse impact). We captured Facebook profiles of college students who were applying for full-time jobs, and recruiters from various organizations reviewed the profiles and provided evaluations. We then followed up with applicants in their new jobs. Recruiter ratings of applicants' Facebook information were unrelated to supervisor ratings of job performance (rs = -.13 to -.04), turnover intentions (rs = -.05 to .00), and actual turnover (rs = -.01 to .01). In addition, Facebook ratings did not contribute to the prediction of these criteria beyond more traditional predictors, including cognitive ability, self-efficacy, and personality. Furthermore, there was evidence of subgroup differences in Facebook ratings that tended to favor female and White applicants. The overall results suggest that organizations should be very cautious about using social media information such as Facebook to assess job applicants.
We tested the longstanding belief that performance is a function of the interaction between cognitive ability and motivation. Using raw data or values obtained from primary study authors as input (k = 40 to 55; N = 8,507 to 11,283), we used meta-analysis to assess the strength and consistency of the multiplicative effects of ability and motivation on performance. A triangulation of evidence based on several types of analyses revealed that the effects of ability and motivation on performance are additive rather than multiplicative. For example, the additive effects of ability and motivation accounted for about 91% of the explained variance in job performance, whereas the ability-motivation interaction accounted for only about 9% of the explained variance. In addition, when there was an interaction, it did not consistently reflect the predicted form (i.e., a stronger ability-performance relation when motivation is higher). Other key findings include that ability was relatively more important to training performance and to performance on work-related tasks in laboratory studies, whereas ability and motivation were similarly important to job performance. In addition, statelike measures of motivation were better predictors of performance than were traitlike measures. These findings have implications for theories about predictors of performance, state versus trait motivation, and maximal versus typical performance. They also have implications for talent management practices concerned with human capital acquisition and the prediction of employee performance.
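The additive-versus-multiplicative comparison described above can be illustrated with a hierarchical regression: fit performance on ability and motivation, then add their product term and see how much explained variance the interaction contributes. Everything in this sketch (sample size, coefficients, noise level) is a hypothetical data-generating process built to be additive, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ability = rng.normal(size=n)
motivation = rng.normal(size=n)
# Hypothetical additive data-generating process (illustrative only):
performance = 0.5 * ability + 0.4 * motivation + rng.normal(scale=0.7, size=n)

def r_squared(predictors, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_additive = r_squared([ability, motivation], performance)
r2_full = r_squared([ability, motivation, ability * motivation], performance)
print(f"additive R^2 = {r2_additive:.3f}, "
      f"interaction adds {r2_full - r2_additive:.4f}")
```

Because the simulated process is purely additive, the interaction term adds essentially nothing here; in the meta-analysis, the analogous partitioning attributed about 91% of the explained variance in job performance to the additive effects.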