Researchers and practitioners are often interested in assessing employee attitudes and work perceptions. Although such perceptions are typically measured using Likert surveys or some other closed-ended numerical rating format, many organizations also have access to large amounts of qualitative employee data. For example, open-ended comments from employee surveys allow workers to provide rich and contextualized perspectives about work. Unfortunately, extracting employee perceptions from qualitative data poses practical challenges. Given this, the present study investigated whether natural language processing (NLP) algorithms could be developed to automatically score employee comments on important work attitudes and perceptions. Using a large sample of employees, algorithms were developed to translate text into scores that reflect what comments were about (theme scores) and how positively targeted constructs were described (valence scores) for 28 work constructs. The resulting algorithms and scores, labeled the Text-Based Attitude and Perception Scoring (TAPS) dictionaries, were built using a mix of count-based scoring and transformer neural networks and are made publicly available. The psychometric properties of the TAPS scores were then investigated. Results showed that theme scores differentiated responses based on their likelihood of discussing specific constructs. Additionally, valence scores exhibited strong evidence of reliability and validity, particularly when analyzed on text responses that were more relevant to the construct of interest. This suggests that researchers and practitioners should explicitly design text prompts to elicit construct-related information if they wish to accurately assess work attitudes and perceptions via NLP.
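For readers unfamiliar with count-based scoring, the sketch below illustrates the general idea behind a dictionary-driven theme score: the share of a comment's tokens that match a construct's term list. The two-construct dictionary here is hypothetical; the published TAPS dictionaries define their own constructs, terms, and scoring rules, and the valence scores additionally draw on transformer models.

```python
# Minimal sketch of count-based theme scoring in the spirit of the
# TAPS approach. THEME_DICTIONARY is a hypothetical stand-in for the
# published TAPS dictionaries.
import re
from collections import Counter

THEME_DICTIONARY = {
    "pay": ["pay", "salary", "compensation", "wage", "bonus"],
    "supervision": ["manager", "supervisor", "boss", "leadership"],
}

def theme_scores(comment: str) -> dict[str, float]:
    """Score a comment on each construct as the proportion of its
    tokens that match the construct's dictionary terms."""
    tokens = re.findall(r"[a-z']+", comment.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        construct: sum(counts[term] for term in terms) / total
        for construct, terms in THEME_DICTIONARY.items()
    }

print(theme_scores("My manager is great, but the pay and bonus structure lag the market."))
```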
Personality assessments help identify qualified applicants in hiring decisions and are used broadly in the organizational sciences. However, many existing personality measures are quite lengthy, and companies and researchers frequently seek ways to shorten personality scales. The current research investigated the effectiveness of a new scale-shortening method called supervised construct scoring (SCS), testing its efficacy across two applied samples. Combining machine learning with content-validity considerations, we show that multidimensional personality scales can be substantially shortened while maintaining reliability and validity, especially when compared to traditional shortening methods. In Study 1, we shortened a 100-item personality assessment of DeYoung et al.'s 10 facets, producing a scale 26% of the original length. SCS scores exhibited strong evidence of reliability, convergence with full-scale scores, and criterion-related validity. This measure, labeled the Short 10, is made freely available. In Study 2, we applied SCS to shorten an operational police personality assessment. Using SCS, we reduced the test to 25% of its original length while maintaining similar levels of reliability and criterion-related validity in predicting job performance ratings.
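To make the shortening idea concrete, here is a minimal sketch of a supervised selection step on simulated data: greedily retain the items whose average best reproduces the full-scale score. This is an illustrative stand-in, not the authors' exact SCS procedure, which pairs supervised modeling with content-validity review.

```python
# Greedy supervised item selection for a single simulated 10-item
# facet: keep the k items whose short-form mean correlates most
# strongly with the full-scale mean. Illustrative only.
import numpy as np

def select_short_form(items: np.ndarray, k: int) -> list[int]:
    """Forward-select k item columns maximizing the correlation of
    the short-form mean with the full-scale mean."""
    full = items.mean(axis=1)
    chosen: list[int] = []
    for _ in range(k):
        best_j, best_r = -1, -np.inf
        for j in range(items.shape[1]):
            if j in chosen:
                continue
            short = items[:, chosen + [j]].mean(axis=1)
            r = np.corrcoef(short, full)[0, 1]
            if r > best_r:
                best_j, best_r = j, r
        chosen.append(best_j)
    return chosen

# Simulated facet: each item response = latent trait + noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = latent[:, None] + rng.normal(scale=1.0, size=(500, 10))

short = select_short_form(items, k=3)
r = np.corrcoef(items[:, short].mean(axis=1), items.mean(axis=1))[0, 1]
print(f"kept items {short}; short-form r with full scale = {r:.3f}")
```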