Against best-practice recommendations, interviewers prefer unstructured interviews in which they are not beholden to regimentation. When interviews are less structured, the interviewer typically generates his or her own set of interview questions. Even in structured interviews, though, the initial interview content must be generated by someone. Thus, it is important to understand the factors that influence what types of questions individuals generate in interview contexts. The current research aims to understand the types of interview questions individuals generate, factors that affect the quality of those questions, how skill in generating interview questions relates to skill in evaluating existing interview questions, and how individual traits relate to skill in generating interview questions. Results show that respondents who are skilled in evaluating existing interview questions are also skilled in writing interview questions from scratch, and these skills relate to general mental ability and social intelligence. Respondents generated questions that most commonly assessed applicant history and self-perceived applicant characteristics, whereas only 30% of questions generated were situational or behavioral.
Forced-choice (FC) personality assessments have shown potential in mitigating the effects of faking. Yet despite increased attention and usage, there exist gaps in understanding the psychometric properties of FC assessments, particularly when compared to traditional single-stimulus (SS) measures. The present study conducted a series of meta-analyses comparing the psychometric properties of FC and SS assessments after placing them on an equal playing field by restricting the analyses to studies that examined matched assessments of each format, thus avoiding the extraneous confound of comparisons drawn from different contexts (Sackett, 2021). Matched FC and SS assessments were compared on criterion-related validity and on susceptibility to faking, indexed by mean shifts and validity attenuation. Additionally, the correlation between FC and SS scores was examined to help establish construct validity evidence. Results showed that matched FC and SS scores exhibit strong correlations with one another (ρ = .69), though correlations weakened when the FC measure was faked (ρ = .59) versus when both measures were taken honestly (ρ = .73). Average scores increased from honest to faked samples for both FC (d = .41) and SS scores (d = .75), though the effect was more pronounced for SS measures and with larger effects for context-desirable traits (FC d = .61, SS d = .99). Criterion-related validity was similar between matched FC and SS measures overall. However, when considering validity in faking contexts, FC scores exhibited greater validity than SS measures. Thus, although FC measures are not completely immune to faking, they exhibit meaningful benefits over SS measures in contexts of faking.
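For readers unfamiliar with the effect sizes reported above, the following is a minimal sketch in Python, using simulated data rather than the study's data, of how the two key quantities are computed: Cohen's d for the honest-to-faked mean shift, and the Pearson correlation between matched FC and SS scores. The sample sizes, population means, and loading of .83 on a shared latent trait are illustrative assumptions chosen so the simulated values land near the reported effects.

```python
# Illustrative sketch only; all data are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(faked, honest):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(faked), len(honest)
    pooled_var = ((n1 - 1) * faked.var(ddof=1)
                  + (n2 - 1) * honest.var(ddof=1)) / (n1 + n2 - 2)
    return (faked.mean() - honest.mean()) / np.sqrt(pooled_var)

# Simulated trait scores: faking inflates means, more so for the SS format
# (population shifts of .75 and .41 mirror the reported d values).
honest_ss = rng.normal(0.00, 1.0, 500)
faked_ss  = rng.normal(0.75, 1.0, 500)
honest_fc = rng.normal(0.00, 1.0, 500)
faked_fc  = rng.normal(0.41, 1.0, 500)

print(f"SS d = {cohens_d(faked_ss, honest_ss):.2f}")
print(f"FC d = {cohens_d(faked_fc, honest_fc):.2f}")

# Correlation between matched FC and SS scores (construct validity evidence):
# both formats load .83 on a common latent trait, so r is near .83**2 = .69.
latent = rng.normal(0, 1, 500)
fc_scores = 0.83 * latent + rng.normal(0, np.sqrt(1 - 0.83**2), 500)
ss_scores = 0.83 * latent + rng.normal(0, np.sqrt(1 - 0.83**2), 500)
print(f"r(FC, SS) = {np.corrcoef(fc_scores, ss_scores)[0, 1]:.2f}")
```

The pooled-standard-deviation form of d shown here is the conventional two-group definition; the meta-analysis itself aggregates such effects across studies, which this sketch does not attempt to reproduce.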