2023
DOI: 10.1111/peps.12587

A simulation of the impacts of machine learning to combine psychometric employee selection system predictors on performance prediction, adverse impact, and number of dropped predictors

Abstract: We compare modern machine learning (MML) techniques to ordinary least squares (OLS) regression on out‐of‐sample (OOS) operational validity, adverse impact, and dropped predictor counts within a common selection scenario: the prediction of job performance from a battery of diverse psychometrically‐validated tests. In total, scores from 1.2 billion validation study participants were simulated to describe outcomes across 31,752 combinations of selection system design and scoring decisions. The most consistently valu…
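The core comparison the abstract describes — fitting OLS and an ML technique to simulated predictor scores, then scoring each on out-of-sample validity — can be sketched roughly as below. This is a minimal illustration under assumed parameters, not the authors' simulation design: the data-generating process, sample sizes, weights, and the choice of ridge regression as the ML stand-in are all assumptions.

```python
# Hypothetical sketch: compare out-of-sample (OOS) validity of OLS vs. a
# ridge-regularized model on simulated selection data. All quantities here
# (sample sizes, weights, noise) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_predictors = 500, 500, 10

# Each predictor (test in the battery) contributes weakly to the criterion.
true_weights = rng.uniform(0.05, 0.3, n_predictors)

def simulate(n):
    # Predictor scores plus a noisy job-performance criterion.
    X = rng.normal(size=(n, n_predictors))
    y = X @ true_weights + rng.normal(scale=1.0, size=n)
    return X, y

X_tr, y_tr = simulate(n_train)   # "validation study" sample
X_te, y_te = simulate(n_test)    # held-out applicants

# OLS weights via least squares.
beta_ols, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Ridge regression (simple ML stand-in), closed form: (X'X + lam*I)^-1 X'y.
lam = 10.0
beta_ridge = np.linalg.solve(
    X_tr.T @ X_tr + lam * np.eye(n_predictors), X_tr.T @ y_tr
)

def oos_validity(beta):
    # Operational-validity proxy: correlation of predicted and actual
    # performance in the held-out sample.
    return np.corrcoef(X_te @ beta, y_te)[0, 1]

print(f"OOS validity, OLS:   {oos_validity(beta_ols):.3f}")
print(f"OOS validity, ridge: {oos_validity(beta_ridge):.3f}")
```

With large, well-behaved samples like this, the two estimators land close together — consistent with the citation statements below, which note that ML's gains over traditional methods tend to be modest at scale.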

Cited by 3 publications (4 citation statements) | References 87 publications
“…When examining ML algorithms compared to traditional methods with large samples, the improvement in prediction is probably not going to be large in most instances. This is illustrated by Koenig et al (2023) Studies 3-5, and especially by Landers et al (2023). These studies were relevant for the special issue specifically because they did not report exceptional findings, which leads to overestimates in our literature and future meta-analyses.…”
Section: Lessons Learned
confidence: 93%
“…Another concern is that building an ML model based on test items rather than total test scores or scales violates the inferences we can make about construct validity, which are based on the total test score or scale. Landers et al (2023) suggest that you cannot infer construct validity from a scale to its individual items but, if all the items are included in the model and scored separately, to what extent can we infer the model has the construct validity of the scale or total score that includes all the items? This issue may be even more complex if items differ not only in the weights they receive in a model but also in whether curvilinear relationships of items are scored in the model.…”
Section: Lessons Learned
confidence: 99%