Attracting high-performing applicants is a critical component of personnel selection and overall organizational success. In this study, the authors meta-analyzed 667 coefficients from 71 studies examining relationships between various predictors and job-organization attraction, job pursuit intentions, acceptance intentions, and job choice. The moderating effects of applicant gender, race, and applicant versus nonapplicant status were also examined. Results showed that applicant attraction outcomes were predicted by job-organization characteristics, recruiter behaviors, perceptions of the recruiting process, perceived fit, and hiring expectancies, but not by recruiter demographics or perceived alternatives. Path analyses showed that applicant attitudes and intentions mediated the predictor-job choice relationships. The authors discuss the implications of these findings for recruiting theory, research, and practice.
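For readers less familiar with how such coefficients are combined, the sketch below shows a bare-bones, sample-size-weighted aggregation of correlations in Python. It is purely illustrative and makes no claim to reproduce the authors' actual procedure: the coefficients, sample sizes, and the simplified sampling-error correction are hypothetical assumptions, and no artifact corrections or moderator analyses are included.

```python
# Minimal sketch of a sample-size-weighted mean correlation, in the spirit of a
# bare-bones psychometric meta-analysis. All numbers are hypothetical placeholders,
# not data from the study.
import numpy as np

r = np.array([0.32, 0.18, 0.25, 0.41, 0.10])      # per-study correlations (hypothetical)
n = np.array([120, 85, 240, 60, 150])              # per-study sample sizes (hypothetical)

r_bar = np.sum(n * r) / np.sum(n)                  # sample-size-weighted mean correlation
var_r = np.sum(n * (r - r_bar) ** 2) / np.sum(n)   # observed variance of correlations
var_e = (1 - r_bar ** 2) ** 2 / (np.mean(n) - 1)   # approximate sampling-error variance
var_rho = max(var_r - var_e, 0.0)                  # residual between-study variance

print(f"weighted mean r = {r_bar:.3f}, residual SD = {var_rho ** 0.5:.3f}")
```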
The authors propose a new procedure for reducing faking on personality tests within selection contexts. This computer-based procedure attempts to identify and warn potential fakers early in the testing process and then give them an opportunity for recourse. Two field studies were conducted to test the efficacy of the proposed procedure. Study 1 participants were 157 applicants competing for 10 staff positions at a large university located in a southern city in the People's Republic of China. In Study 1, potential fakers received a warning message, whereas nonfakers received a nonwarning (control) message. Study 2 participants were 386 Chinese college students applying for membership in a popular student organization at the same university where Study 1 was conducted. In Study 2, the warning and control messages were randomly assigned across all applicants. Results showed some promise for the proposed procedure, but several practical issues need to be considered.
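The abstract describes the procedure only at a high level; the sketch below illustrates its general logic, namely scoring a faking indicator partway through the test and branching to a warning or a control message. The indicator, cutoff, and message wording are hypothetical assumptions, not details taken from the studies.

```python
# Illustrative sketch of an early-warning branching step in a computer-based test.
# The indicator scale, cutoff, and message text are hypothetical, not from the studies.
WARNING_CUTOFF = 4  # hypothetical cutoff on an unlikely-virtues-style faking indicator

def midtest_message(faking_indicator_score: int, warn_enabled: bool = True) -> str:
    """Return the message shown after the first block of items."""
    if warn_enabled and faking_indicator_score >= WARNING_CUTOFF:
        return ("Some of your responses resemble patterns associated with overly "
                "favorable self-presentation. Please answer the remaining items "
                "as accurately as possible.")
    return "Thank you. Please continue with the remaining items."

print(midtest_message(faking_indicator_score=5))   # warning branch
print(midtest_message(faking_indicator_score=2))   # control branch
```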
We compared the susceptibility of two emotional intelligence (EI) tests to faking. In a laboratory study using a within-subjects design, participants completed the EQ-i and the MSCEIT in two sessions. In the first session (i.e., the 'applicant condition'), participants were given a job description and asked to respond to the EI measures as though they were applying for that job. Participants returned 2 weeks later to repeat the tests in a 'non-applicant' condition in which they were told to answer as honestly as possible. Mean differences between conditions indicated that the EQ-i was more susceptible to faking than the MSCEIT. Faking indices predicted applicant-condition EQ-i scores after controlling for participants' non-applicant EQ-i scores, whereas the faking indices were unrelated to applicant-condition MSCEIT scores when non-applicant MSCEIT scores were controlled. Using top-down selection, participants were more likely to be selected based on their applicant-condition EQ-i scores than their non-applicant EQ-i scores, but they had an equal likelihood of being selected based on their MSCEIT scores from each condition. Implications for the use of these two EI tests are discussed.
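The top-down selection comparison can be illustrated with a short simulation: rank candidates on their applicant-condition scores versus their non-applicant scores and compare who would be selected under each ranking. The score distributions, the inflation model, and the selection ratio below are hypothetical assumptions for illustration, not data from the study.

```python
# Minimal sketch of a top-down selection comparison under two scoring conditions.
# All distributions and parameters are hypothetical, not data from the study.
import numpy as np

rng = np.random.default_rng(0)
honest = rng.normal(100, 15, size=50)              # hypothetical non-applicant scores
applicant = honest + rng.normal(8, 6, size=50)     # hypothetical inflation when applying
k = 10                                             # hypothetical number of selectees

selected_applicant = set(np.argsort(applicant)[-k:])  # top-k by applicant-condition score
selected_honest = set(np.argsort(honest)[-k:])        # top-k by non-applicant score
overlap = len(selected_applicant & selected_honest)
print(f"{overlap} of {k} selectees are the same under both conditions")
```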