2021
DOI: 10.1111/ijsa.12345

Introducing a supervised alternative to forced‐choice personality scoring: A test of validity and resistance to faking

Abstract: This paper examines a new personality assessment scoring approach labeled supervised forced choice scoring (SFCS), which aims to maximize the construct validity of forced choice (FC) personality assessments. SFCS maximally weights FC responses to predict or "reproduce" honest, normative, and reliable personality scores using machine learning. In this proof-of-concept study, a graded response FC assessment was tested across several samples, and SFCS resulted in psychometric improvements over traditional FC scoring.…
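As a rough illustration of the scoring idea described in the abstract, the sketch below fits a supervised model that weights graded FC responses to reproduce honest, normative trait scores. Everything here is an assumption for demonstration only: the data are synthetic, the learner is ridge regression, and there are five traits; the paper's actual machine-learning pipeline, feature coding, and samples are not specified in this excerpt.

```python
# Hypothetical sketch of supervised forced-choice scoring (SFCS).
# Assumptions (not from the paper): synthetic data, a ridge-regression
# learner, and five normative trait scores as supervision targets.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_respondents, n_fc_items, n_traits = 500, 60, 5

# Graded FC responses (e.g., strength of preference for statement A over B).
X = rng.integers(0, 5, size=(n_respondents, n_fc_items)).astype(float)

# Honest, normative trait scores from a single-stimulus (Likert) measure,
# collected under honest instructions; these are the supervision targets.
y = X[:, :n_traits] @ rng.normal(size=(n_traits, n_traits)) + rng.normal(
    scale=2.0, size=(n_respondents, n_traits)
)

# SFCS idea: learn weights on FC responses that best "reproduce" the
# honest normative scores, then apply those weights to new FC protocols.
model = Ridge(alpha=1.0).fit(X, y)
sfcs_scores = model.predict(X)

# Convergence with the normative criterion for the first trait,
# estimated out of sample via cross-validation (R^2 here, not r).
print(cross_val_score(Ridge(alpha=1.0), X, y[:, 0], cv=5).mean())
```

In use, the fitted weights would presumably be applied to FC protocols collected under motivated (applicant) conditions, with validity judged by convergence with the honest normative criterion, as the abstract describes.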

Cited by 9 publications (12 citation statements)
References 62 publications
“…Brown & Maydeu‐Olivares, 2011, 2012, 2013). Although IRT‐based scoring rarely yields major psychometric improvements (Speer & Delacruz, 2021), future research should compare the effect of faking across different FC scoring algorithms when using interest inventories. Similarly, item response process tree models are gaining popularity as a method of detecting faking on Likert scales as well (Böckenholt, 2017; Sun et al., 2021).…”
Section: Discussion
confidence: 99%
“…One popular method of preventing faking is by using FC items (Christiansen et al., 1998; Converse et al., 2010; Heggestad et al., 2006; Joubert et al., 2015; Salgado & Tauríz, 2014). FC assessments have respondents choose, rank, sort, indicate higher frequency for, or indicate preference for two or more paired response options (Speer & Delacruz, 2021). FC items are thought to be more challenging than Likert-scale items to fake because respondents cannot simply inflate their scores to present themselves in a more attractive manner (Christiansen et al., 2005; Converse et al., 2010).…”
Section: Preventing Faking
confidence: 99%
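To make the quoted logic concrete, here is a toy example of classic pairwise FC scoring (the items and trait keys are hypothetical, not drawn from any cited assessment): because each choice credits one trait at the expense of another, every respondent's total across traits is fixed, so blanket score inflation is impossible.

```python
# Minimal illustration (hypothetical items) of why classic FC scoring
# resists blanket inflation: every choice credits one trait at the
# expense of another, so the total across traits is constant.
from collections import Counter

# Each pair offers two statements keyed to different traits.
pairs = [("Conscientiousness", "Extraversion"),
         ("Agreeableness", "Emotional Stability"),
         ("Conscientiousness", "Openness")]

def score_fc(choices):
    """choices[i] is 0 or 1: which statement in pair i was endorsed."""
    scores = Counter()
    for (a, b), pick in zip(pairs, choices):
        scores[b if pick else a] += 1
    return scores

honest = score_fc([0, 1, 0])   # trade-offs reflect true standing
faker  = score_fc([0, 0, 0])   # tries to look maximally desirable

# Both protocols sum to len(pairs): inflating one trait deflates another.
print(sum(honest.values()), sum(faker.values()))  # 3 3
```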
“…That said, it should be noted that the effect was based on only nine samples, and therefore this effect may not be a stable representation of the true relationship between faked-FC and honest-SS scores. Furthermore, a large share of the sample size for the faked-FC and honest-SS effects came from Speer and Delacruz (2021), which used a new method of FC scoring designed specifically to maximize correlations between FC scores and SS scores. Thus, this might have inflated the FC-faked SS-honest correlations, given the larger percentage contribution to the sample size for this analysis (25%).…”
Section: Results
confidence: 99%
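A minimal numerical sketch (invented values, not the meta-analysis's data) shows the sensitivity the quote describes: one high-correlation study carrying 25% of the total N pulls an N-weighted mean effect upward relative to an unweighted mean.

```python
import numpy as np

# Made-up illustration: nine samples, where one high-correlation sample
# contributes 25% of the total N, mirroring the quoted scenario.
r = np.array([0.30] * 8 + [0.60])   # hypothetical correlations
n = np.array([150] * 8 + [400])     # last sample = 25% of total N

weighted = (n * r).sum() / n.sum()  # N-weighted mean effect
print(round(r.mean(), 3), round(weighted, 3))  # 0.333 vs 0.375
```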
“…Focusing first on convergent correlations with SS scores, the observed and corrected correlations were .55 for IRT, .49 for summative, and .72 for other/unknown. This latter large effect for other/unknown was mostly driven by the four samples from Speer and Delacruz (2021), where a new FC scoring method was used that was specifically developed to recreate existing SS scores (i.e., supervised FC scoring). Regardless, all these effects should be viewed with caution, given the presence of other moderators.…”
Section: Effect of Scoring Methods
confidence: 99%
“…Given that the cost of a “bad hire” has been reported to range from $25 to $50,000 (a financial loss that some organizations may not be able to survive in times of economic turmoil), and given that we may be approaching a time when organizations are hoping to acquire generic human capital resources to recover from a recession, organizations should try to deter response distortion as much as possible by implementing a variety of types of selection assessments. For example, implicit personality tests, in which personality is measured indirectly (LeBreton et al., 2020), and various types of forced‐choice personality assessments (Cao & Drasgow, 2019; Speer & Delacruz, 2021) have been shown to be less susceptible to faking than their self‐report counterparts. If organizations feel compelled to rely on self‐reports, they should take additional steps to ensure data quality by implementing faking detection methods (Levashina et al., 2014), which (like washing one's hands on a regular basis) is good advice in normal times and especially in a pandemic.…”
Section: Discussion
confidence: 99%