This paper examines a new personality assessment scoring approach labeled supervised forced choice scoring (SFCS), which aims to maximize the construct validity of forced choice (FC) personality assessments. SFCS maximally weights FC responses to predict or "reproduce" honest, normative, and reliable personality scores using machine learning. In this proof-of-concept study, a graded response FC assessment was tested across several samples, and SFCS resulted in psychometric improvements over traditional FC scoring. Correlations with aligned single-stimulus trait scores (taken honestly) were strong, both when the FC measure was taken honestly and when it was taken in induced applicant settings. SFCS scores also exhibited small shifts in average scores between honest and faked conditions and were predictive of organizational citizenship behaviors, employee engagement, and leadership emergence at work. Although SFCS showed merit in this proof-of-concept study, it is unclear how well results will generalize to new FC measures, and we urge more research on this scoring method.
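The abstract specifies only that machine learning is used to weight FC responses so that they reproduce honest single-stimulus (SS) trait scores, so the sketch below is a hedged illustration of that general idea. The ridge regression model, the placeholder data, and all variable names are assumptions for illustration, not the authors' actual SFCS procedure.

```python
# Minimal sketch of a supervised forced-choice scoring (SFCS) pipeline.
# Assumption: ridge regression and the synthetic data below are stand-ins;
# the paper says only that "machine learning" weights FC responses to
# reproduce honest single-stimulus (SS) trait scores.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# X_fc: graded FC responses (n respondents x n FC items), collected honestly.
# y_ss: matched honest SS trait scores that the scoring key should reproduce.
X_fc = rng.integers(1, 6, size=(500, 60)).astype(float)          # placeholder data
y_ss = X_fc[:, :10].mean(axis=1) + rng.normal(0, 0.5, 500)        # placeholder data

model = Ridge(alpha=1.0)

# Cross-validated fit: how well the weighted FC responses recover the
# honest SS trait scores out of sample.
r2_scores = cross_val_score(model, X_fc, y_ss, cv=5, scoring="r2")
print(f"Mean cross-validated R^2: {r2_scores.mean():.3f}")

# Fit on the full development sample; the learned weights then serve as
# the SFCS scoring key for this trait.
model.fit(X_fc, y_ss)
sfcs_scores = model.predict(X_fc)
```

In practice, the weights would be estimated in a development sample and then applied to score new FC respondents, including applicants.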
Against best-practice recommendations, interviewers often prefer unstructured interviews, in which they are not beholden to regimentation. When interviews are less structured, the interviewer typically generates his or her own set of interview questions. Even in structured interviews, though, the initial interview content must be generated by someone. Thus, it is important to understand the factors that influence what types of questions individuals generate in interview contexts. The current research aims to understand the types of interview questions individuals generate, the factors that affect the quality of those questions, how skill in generating interview questions relates to skill in evaluating existing interview questions, and how individual traits relate to skill in generating interview questions. Results show that respondents who are skilled in evaluating existing interview questions are also skilled in writing interview questions from scratch, and that these skills relate to general mental ability and social intelligence. Respondents generated questions that most commonly assessed applicant history and self-perceived applicant characteristics, whereas only 30% of generated questions were situational or behavioral.
Researchers and practitioners are often interested in assessing employee attitudes and work perceptions. Although such perceptions are typically measured using Likert surveys or some other closed-ended numerical rating format, many organizations also have access to large amounts of qualitative employee data. For example, open-ended comments from employee surveys allow workers to provide rich and contextualized perspectives about work. Unfortunately, there are practical challenges in trying to understand employee perceptions from qualitative data. Given this, the present study investigated whether natural language processing (NLP) algorithms could be developed to automatically score employee comments according to important work attitudes and perceptions. Using a large sample of employees, algorithms were developed to translate text into scores that reflect what comments were about (theme scores) and how positively targeted constructs were described (valence scores) for 28 work constructs. The resulting algorithms and scores, labeled the Text-Based Attitude and Perception Scoring (TAPS) dictionaries, are made publicly available and were built using a mix of count-based scoring and transformer neural networks. The psychometric properties of the TAPS scores were then investigated. Results showed that theme scores differentiated responses based on their likelihood of discussing specific constructs. Additionally, valence scores exhibited strong evidence of reliability and validity, particularly when analyzed on text responses that were more relevant to the construct of interest. This suggests that researchers and practitioners should explicitly design text prompts to elicit construct-related information if they wish to accurately assess work attitudes and perceptions via NLP.
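TAPS combines count-based theme scoring with transformer-based valence scoring. The snippet below is a rough sketch of those two ideas only, not the published TAPS dictionaries: the mini word list for one construct and the generic pretrained sentiment model are hypothetical stand-ins.

```python
# Illustrative sketch of the two scoring ideas behind TAPS: a count-based
# theme score (does a comment discuss a construct?) and a transformer-based
# valence score (how positively is it described?). The word list and model
# below are assumptions, not the published TAPS dictionaries.
from transformers import pipeline

# Hypothetical mini-dictionary for one construct, e.g., supervisor support.
THEME_WORDS = {"manager", "supervisor", "boss", "leadership"}

def theme_score(comment: str) -> float:
    """Fraction of tokens matching the construct dictionary (count-based)."""
    tokens = comment.lower().split()
    return sum(t.strip(".,!?") in THEME_WORDS for t in tokens) / max(len(tokens), 1)

# Generic pretrained sentiment model as a stand-in for a valence scorer.
valence_model = pipeline("sentiment-analysis")

def valence_score(comment: str) -> float:
    """Signed model confidence: positive sentiment > 0, negative < 0."""
    result = valence_model(comment)[0]
    sign = 1.0 if result["label"] == "POSITIVE" else -1.0
    return sign * result["score"]

comment = "My manager listens and supports my development."
print(theme_score(comment), valence_score(comment))
```

The study's finding that valence scores were most valid on construct-relevant text corresponds, in this sketch, to applying the valence scorer only where the theme score indicates the construct is actually being discussed.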
Forced-choice (FC) personality assessments have shown potential in mitigating the effects of faking. Yet despite increased attention and usage, gaps remain in understanding the psychometric properties of FC assessments, particularly when compared to traditional single-stimulus (SS) measures. The present study conducted a series of meta-analyses comparing the psychometric properties of FC and SS assessments after placing them on an equal playing field: only studies that examined matched assessments of each format were included, thus avoiding the extraneous confound of comparisons drawn from different contexts (Sackett, 2021). Matched FC and SS assessments were compared in terms of criterion-related validity and susceptibility to faking, the latter in terms of mean shifts and validity attenuation. Additionally, the correlation between FC and SS scores was examined to help establish construct validity evidence. Results showed that matched FC and SS scores exhibit strong correlations with one another (ρ = .69), though correlations weakened when the FC measure was faked (ρ = .59) versus when both measures were taken honestly (ρ = .73). Average scores increased from honest to faked samples for both FC (d = .41) and SS scores (d = .75), though the effect was more pronounced for SS measures and larger for context-desirable traits (FC d = .61, SS d = .99). Criterion-related validity was similar between matched FC and SS measures overall. However, when considering validity in faking contexts, FC scores exhibited greater validity than SS measures. Thus, although FC measures are not completely immune to faking, they exhibit meaningful benefits over SS measures in contexts of faking.
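The honest-to-faked mean shifts reported above (e.g., FC d = .41, SS d = .75) are standardized mean differences. A minimal sketch of the standard pooled-SD Cohen's d computation follows; the data are placeholders, and this illustrates only the effect-size formula, not the meta-analytic procedure itself.

```python
# Standard pooled-SD Cohen's d, the effect size behind the reported
# honest-to-faked mean shifts. Placeholder data; not the meta-analysis.
import numpy as np

def cohens_d(honest: np.ndarray, faked: np.ndarray) -> float:
    """Standardized mean shift from the honest to the faked condition."""
    n1, n2 = len(honest), len(faked)
    pooled_var = ((n1 - 1) * honest.var(ddof=1)
                  + (n2 - 1) * faked.var(ddof=1)) / (n1 + n2 - 2)
    return (faked.mean() - honest.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
honest = rng.normal(3.0, 0.6, 300)   # placeholder honest-condition scores
faked = rng.normal(3.4, 0.6, 300)    # placeholder applicant-condition scores
print(f"d = {cohens_d(honest, faked):.2f}")
```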