2019
DOI: 10.1111/ijsa.12237
Social intelligence and interview accuracy: Individual differences in the ability to construct interviews and rate accurately

Abstract: This research examined differences in interviewers' ability to identify effective interview questions and to accurately rate interviewees' responses. Given the theoretical association between these interview activities and the construct of social intelligence (SI), a performance‐based measure of SI was developed utilizing situational judgment test methodology. The initial step was to examine evidence of the psychometric properties and construct validity of the new SI measure. The SI measure, a test of general …

Cited by 16 publications (27 citation statements). References 83 publications.
“…More specifically, Bryan and Mayer (2021) reported a meta-analytic correlation of r = .43 (95% CI = 0.39, 0.48) between people-centered ability measures across 87 unique studies. Our observed correlation may also be somewhat attenuated by unshared method variance due to the differences in format between the two tests (Spector et al., 2019). Even though both the Shapes Test and SSIT were developed to assess social intelligence, and even though both exhibit construct validity evidence in assessing their intended construct domains (Brown et al., 2019; Speer et al., 2019), an additional layer of validity evidence was obtained for this study. Specifically, content validity judgments were independently made for each test as to the degree to which the tests measure social intelligence.…”
Section: Written SSIT (mentioning; confidence: 62%)
“…A total score was calculated by summing all the points scored across both most- and least-effective choices for all 29 items (maximum possible score = 110). The SSIT demonstrates construct validity evidence across multiple studies (Speer et al., 2019).…”
Section: Written SSIT (mentioning; confidence: 98%)
“…One of the reasons that our findings did not support the moderating effect of the number of job-analysis dimensions on candidates’ job performance in Hypotheses 2a and 3a is that the number of dimensions may not reflect how well the interviewer chose those dimensions. Speer et al. (2019) found that interviewers’ social intelligence and general mental ability are important factors that help interviewers choose more suitable interview questions and rate prospective employees accurately. Including interviewers’ personality and intelligence data may help fill this gap and show how the different aspects of a radar chart should be chosen to predict candidates’ performance more accurately.…”
Section: Discussion (mentioning; confidence: 99%)
“…Unlike brainteaser questions, OPQs (1) do not require analytical problem solving, and (2) are related to aspects of an applicant's personality or biographical background. Existing research has either ignored this type of question altogether (Honer et al., 2007; Wright et al., 2012) or has not differentiated them from their cognitively demanding counterparts (i.e., ‘brainteasers’; Highhouse et al., 2019; Speer et al., 2019). Although there is considerable variability in how hiring professionals use OPQs for decision making in a job interview context, recruitment research suggests that the use of OPQs could affect job seekers’ and applicants’ experiences during the hiring process, with both immediate effects on applicant reactions (e.g., interview motivation) and downstream effects on recruitment outcomes (e.g., intentions to apply) (Breaugh, 2013).…”
Section: Horse-sized Duck or Duck-sized Horses? (mentioning; confidence: 99%)