Job analysis data consist largely of judgements from subject matter experts (SMEs), judgements of unknown accuracy. To date, accuracy has been inferred largely from inter‐rater reliability or agreement among SMEs, without reference to an external criterion. The current research examined job analysis rating accuracy by comparing SME importance ratings of knowledge, skills, abilities, and other requirements (KSAOs) with the validity of measures of these same KSAOs in predicting job performance. We tested hypotheses about whether SME judgement accuracy is moderated by SME job tenure, industry experience, role, self‐reported knowledge of the job, and data scrubbing. Four independent tests involving 48 separate validation studies were conducted. In three of the four samples, there was a large relationship (in the r = .50 range) between trait importance and trait validity, showing that job analysis ratings can be directly related to test validities and can serve as a measure of job analysis accuracy. Moderator analyses showed that the best results may come from supervisors, rather than incumbents, and from SMEs who reported knowing the job extremely well; there were no differences due to SME job tenure, industry experience, or deletion of outliers. Demonstrating a direct relationship between SME judgements and actual criterion‐related validity provides a new lens for operationalizing accuracy in job analysis research.
Practitioner Points
This study demonstrates that test validities can serve as a measure of job analysis accuracy, providing a new avenue for job analysis research.
The most accurate job analysis ratings came from supervisors and those who reported knowing the job extremely well.