2021
DOI: 10.1111/ijsa.12353
Can you crowdsource expertise? Comparing expert and crowd‐based scoring keys for three situational judgment tests

Abstract: This study seeks to answer a simple question: Is it possible to develop a scoring key for a situational judgment test (SJT) without a pool of subject matter experts (SMEs)? The SJT method is widely studied and used for selection in both occupational and educational settings (Oswald et al., 2004; Lievens & Sackett, 2012). SJTs are typically designed to measure procedural knowledge about how to behave effectively in a particular job (Motowidlo et al., 2006). Along these lines, SJT items are typically scored using…

Cited by 5 publications (6 citation statements)
References 87 publications
“…6 A growing body of research suggests that MTurk can be an adequate source of data if researchers are attentive to data quality issues (Cheung et al., 2017; Michel et al., 2018; Paolacci & Chandler, 2014). Specific to SJTs, Brown et al. (2021) found that MTurk participants' ratings of SJTs correlated quite strongly (r values ≥ .88) with ratings from subject matter experts, a finding supported by our own comparisons (presented later). Only participants who were US-based, at least 18 years old, and had at least 95% of their previous tasks accepted by other requesters were invited to participate (also, see Table 2 for additional data quality controls).…”
Section: Study 2: Verification of SJT Item Properties (supporting)
confidence: 78%
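The convergence check described in the statement above amounts to an option-level correlation between two sets of mean ratings. Below is a minimal Python sketch; the arrays, their values, and the variable names are invented for illustration and are not data from Brown et al. (2021):

```python
import numpy as np

# Hypothetical mean effectiveness ratings for ten SJT response options,
# one mean per option from each rater pool (all values are made up).
expert_means = np.array([4.2, 1.8, 3.5, 2.1, 4.8, 1.2, 3.9, 2.7, 4.5, 1.5])
crowd_means = np.array([4.0, 2.0, 3.6, 2.3, 4.7, 1.4, 3.8, 2.9, 4.4, 1.7])

# Pearson correlation between the two sets of option-level means;
# values of .88 or higher would mirror the convergence reported above.
r = np.corrcoef(expert_means, crowd_means)[0, 1]
print(f"expert-crowd convergence: r = {r:.2f}")
```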
“…This is strikingly similar to proportion consensus scoring approaches, which have been used to score measures of emotional intelligence (Legree et al., 2005), tacit knowledge (Hedlund et al., 2003), and practical intelligence (Fox & Spector, 2000). Likewise, consensus scores have also been found to converge with expert-based scoring keys for different SJTs (Brown et al., 2021).…”
Section: Social Desirability, Consensus Scoring, and Social Norms (supporting)
confidence: 70%
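As a rough illustration of what proportion consensus scoring involves, here is a generic sketch of the approach family cited above, not the exact procedure of Legree et al. (2005); the option labels and response data are hypothetical:

```python
from collections import Counter

# Hypothetical norming-sample responses to one SJT item (options A-D).
sample_responses = ["A", "A", "B", "A", "C", "A", "B", "A", "D", "A"]

# Build the consensus key: each option is worth the proportion of the
# norming sample that endorsed it.
n = len(sample_responses)
consensus_key = {opt: count / n for opt, count in Counter(sample_responses).items()}

def consensus_score(response: str) -> float:
    """Score a new response by its popularity in the norming sample."""
    return consensus_key.get(response, 0.0)

print(consensus_key)         # {'A': 0.6, 'B': 0.2, 'C': 0.1, 'D': 0.1}
print(consensus_score("A"))  # 0.6
```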
“…Responses were scored as either 0, 1, 2, or 3 based on effectiveness ratings gathered from SMEs. Scores on this test have been reported to correlate positively with other performance measures of social and emotional intelligence and with the ability to evaluate recordings of structured interviews or to identify more effective interview questions (Brown et al., 2021; Speer et al., 2019; Speer et al., 2020). The workplace skill SJT demonstrated weaker internal consistency (α = .36).…”
Section: Methods (mentioning)
confidence: 95%
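The internal consistency figure quoted above (α = .36) is Cronbach's alpha. A self-contained sketch of the computation on made-up 0-3 item scores follows; the score matrix and function name are assumptions for illustration:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of item scores."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 0-3 scores for five respondents on four SJT items.
scores = np.array([
    [3, 2, 1, 3],
    [1, 0, 2, 1],
    [2, 2, 2, 3],
    [0, 1, 0, 1],
    [3, 3, 2, 2],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```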
“…Five hundred working adults recruited through Amazon's Mechanical Turk (“MTurk”) participated in this study. We chose MTurk because it is more generalizable than student samples, is appropriate for workplace research (Highhouse & Zhang, 2015), and has been used when examining questions related to selection (e.g., Brown et al, 2021; Zhang et al, 2018). Upon completion of the survey, participants were paid $2.50.…”
Section: Methods (mentioning)
confidence: 99%