This article provides an overview of and practical guide to implementing item response theory (IRT) measurement equivalence (ME) or differential item functioning (DIF) analysis. We (a) present the need for establishing IRT ME/DIF analysis, (b) discuss the similarities and differences between factor-analytic and IRT ME/DIF analyses, (c) review commonly used IRT ME/DIF indices and procedures, (d) provide three illustrations of two recommended IRT procedures, and (e) furnish recommendations for conducting IRT ME/DIF analysis. We conclude by discussing future directions for IRT ME/DIF research.
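As a minimal, hypothetical sketch of the kind of IRT DIF comparison such procedures formalize, the following computes two-parameter logistic (2PL) item characteristic curves for a reference and a focal group and approximates the unsigned area between them (in the spirit of Raju's area measures; the item parameters here are invented for illustration, not taken from the article):

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a keyed response
    at trait level theta, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item parameters estimated separately in two groups.
ref = {"a": 1.2, "b": 0.0}    # reference group
focal = {"a": 1.2, "b": 0.5}  # focal group: shifted difficulty -> uniform DIF

# Riemann-sum approximation of the unsigned area between the two ICCs
# over a theta grid; a larger area indicates more DIF for the item.
grid = [i / 10 for i in range(-40, 41)]  # theta from -4 to 4, step 0.1
area = sum(abs(icc_2pl(t, **ref) - icc_2pl(t, **focal)) for t in grid) * 0.1
```

With equal discriminations, the true area equals the difficulty difference (here 0.5); operational DIF analyses would instead use likelihood-ratio or Wald tests in dedicated IRT software.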
Forced-choice (FC) is a popular format for developing personality measures, in which individuals must choose one or more statements from a set of options. Although FC measures have been proposed to reduce score inflation in high-stakes assessments, empirical studies have yielded inconsistent results regarding their effectiveness. In this study, we conducted a meta-analysis of studies comparing FC personality measure scores between low-stakes and (both simulated and actual) high-stakes situations. Results suggest that the overall score inflation effect size for FC personality measures is 0.06. In selection scenarios, score inflation for FC scales is much lower than the meta-analytic effect size for single-statement personality measures across most personality facets. The score inflation effect size was also found to vary across FC scale characteristics and study design factors. Specifically, FC scales were consistently found to be more faking-resistant when constructed with statements balanced in social desirability and with responses scored via a normative approach. FC scales constructed with the PICK format were also found to be faking-resistant, while more applicant-incumbent studies are needed to examine the fakability of MOLE FC scales. Evidence at the overall level supports the use of multidimensional scales and extremity balance of statements, but results are not consistent across personality facets, or when large samples are excluded. Personality facets of high relevance to the target job were found to exhibit larger inflation than facets of low relevance to the target job. Practical guidance on constructing and using FC personality measures for personnel selection purposes is provided.
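As a minimal, hypothetical sketch of how a PICK-format FC block might be scored with a simple count-based approach (the statement texts and trait keys are invented; operational FC scoring typically uses Thurstonian IRT rather than raw counts):

```python
# Each FC block presents statements keyed to different traits; the respondent
# PICKs the one "most like me". A simple count tallies picks per trait.
blocks = [
    {"I plan ahead": "conscientiousness", "I enjoy parties": "extraversion"},
    {"I stay calm": "emotional_stability", "I help others": "agreeableness"},
]
picks = ["I plan ahead", "I stay calm"]  # hypothetical responses

scores = {}
for block, pick in zip(blocks, picks):
    trait = block[pick]
    scores[trait] = scores.get(trait, 0) + 1
```

Count-based scoring of this kind is ipsative (picks for one trait come at the expense of others), which is one reason the abstract's distinction between scoring approaches matters for faking resistance.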
Recent advances in text mining have provided new methods for capitalizing on the voluminous natural language text data created by organizations, their employees, and their customers. Although often overlooked, decisions made during text preprocessing affect whether the content and/or style of language are captured, the statistical power of subsequent analyses, and the validity of insights derived from text mining. Past methodological articles have described the general process of obtaining and analyzing text data, but recommendations for preprocessing text data were inconsistent. Furthermore, primary studies use and report different preprocessing techniques. To address this, we conduct two complementary reviews of computational linguistics and organizational text mining research to provide empirically grounded text preprocessing decision-making recommendations that account for the type of text mining conducted (i.e., open or closed vocabulary), the research question under investigation, and the data set’s characteristics (i.e., corpus size and average document length). Notably, deviations from these recommendations will be appropriate and, at times, necessary due to the unique characteristics of one’s text data. We also provide recommendations for reporting text mining to promote transparency and reproducibility.
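The preprocessing decisions discussed above can be illustrated with a minimal, hypothetical pipeline; the function name, flags, and toy stopword list are our own illustration, not the authors' recommendations:

```python
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is"}  # toy list

def preprocess(doc, lowercase=True, remove_stopwords=True):
    """Minimal open-vocabulary preprocessing: tokenize on letter runs,
    optionally lowercase and drop stopwords. Each switch changes what the
    later analysis can see (e.g., stopwords carry stylistic information,
    so removing them privileges content over style)."""
    tokens = re.findall(r"[a-zA-Z']+", doc)
    if lowercase:
        tokens = [t.lower() for t in tokens]
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

tokens = preprocess("The staff is helpful and the service is quick.")
```

Toggling `lowercase` or `remove_stopwords` on the same corpus is a quick way to see how a single preprocessing choice alters the resulting vocabulary and, downstream, statistical power.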
The factor structure of the Values in Action Inventory of Strengths (VIA-IS; Peterson & Seligman, 2004) has not been well established as a result of methodological challenges primarily attributable to a global positivity factor, item cross-loading across character strengths, and questions concerning the unidimensionality of the scales assessing character strengths. We sought to overcome these methodological challenges by applying exploratory structural equation modeling (ESEM) at the item level using a bifactor analytic approach to a large sample of 447,573 participants who completed the VIA-IS with all 240 character strengths items and a reduced set of 107 unidimensional character strength items. It was found that a 6-factor bifactor structure generally held for the reduced set of unidimensional character strength items; these dimensions were justice, temperance, courage, wisdom, transcendence, humanity, and an overarching general factor that is best described as dispositional positivity.
This study investigated the psychometric properties of 3 frequently administered emotional intelligence (EI) scales (Wong and Law Emotional Intelligence Scale [WLEIS], Schutte Self-Report Emotional Intelligence Test [SEIT], and Trait Emotional Intelligence Questionnaire [TEIQue]), which were developed on the basis of different theoretical frameworks (i.e., ability EI and mixed EI). By conducting item response theory (IRT) analyses, the authors examined the item parameters and compared the fits of 2 response process models (i.e., dominance model and ideal point model) for these scales with data from a sample of 355 undergraduates recruited from a subject pool. Several important findings were obtained. First, the EI scales seem better able to differentiate individuals at low trait levels than at high trait levels. Second, a dominance model showed better fit to the self-report ability EI scale (WLEIS) and also fit most subfactors of the SEIT better, except for the mood regulation/optimism factor. Both dominance and ideal point models fit a self-report mixed EI scale (TEIQue). Our findings suggest (a) the EI scales should be revised to include more items at moderate and higher trait levels, and (b) the nature of the EI construct should be considered during the process of scale development.
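The contrast between the two response process models compared above can be sketched with deliberately simplified response functions; the Gaussian-shaped ideal-point curve below is our illustration of single-peakedness, not the generalized graded unfolding model (GGUM) actually estimated in IRT software:

```python
import math

def dominance_p(theta, a=1.0, b=0.0):
    """Dominance (2PL-type) response function: endorsement probability
    increases monotonically with trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ideal_point_p(theta, delta=0.0, width=1.0):
    """Simplified ideal-point response function: endorsement peaks when a
    person's trait level matches the item's location delta and falls off
    in both directions (single-peaked, unlike the dominance curve)."""
    return math.exp(-((theta - delta) ** 2) / (2 * width ** 2))

# Dominance: people far above the item's location still endorse it.
# Ideal point: people far above the item's location endorse it less,
# because the item no longer describes them well.
```

This single-peaked versus monotone distinction is what makes model choice consequential for self-report items, and hence for the scale revision recommendations the study offers.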