2017
DOI: 10.1111/joop.12186

A meta‐analysis of interview length on reliability and validity

Abstract: First impressions are frequently seen as a biasing factor that may prevent an interviewer from forming a comprehensive assessment of the applicant. However, research has found that people can make surprisingly accurate impressions of others based on minimal information. Additional exposure to the applicant would be expected to lead to a more accurate impression, but a previous meta-analysis on the employment interview found evidence for a negative relationship between the length of the interview and validity. …

Cited by 13 publications (11 citation statements) | References 114 publications
“…Third and relatedly, the focal relationships between interview ratings of personality traits and supervisor ratings of OCBs were only moderate, ranging from r = .22 to r = .30. These effect sizes can be categorized as small to medium (Bosco, Aguinis, Singh, Field & Pierce, 2015; Cohen, 1992) and are slightly lower than uncorrected estimates reported in the latest meta-analyses on the validity of medium to highly structured job interviews, ranging from r = .25 to r = .36 (Huffcutt et al., 2014; Thorsteinson, 2018). One explanation may be the heterogeneity of the present sample (i.e., interviewees held different jobs and interview questions were not tailored to meet the unique demands of these specific jobs).…”
Section: Discussion
confidence: 85%
“…They conclude that "fast and frugal" heuristic judgements made during the rapport-building stage influence interviewer decision-making, raising concerns that bias might originate from initial impressions formed during rapport building (Levashina et al., 2014). In line with this, a recent meta-analysis shows that the length of the interview is unrelated to interview validity and reliability, providing a different type of evidence that decisions may be made early in the interview process (Thorsteinson, 2018).…”
Section: Anchoring-and-Adjustment During the Job Interview
confidence: 98%
“…Moreover, investigating the underlying cognitive and motivational processes of biased selection decisions against stigmatized vs. non-stigmatized applicants may lead to more generalizable and fundamental insights into hiring discrimination that could be applied to many different marginalized groups in many different contexts (Derous et al., 2013). Finally, the literature on interview validity and bias typically considers characteristics of the applicant (e.g., Segrest Purkiss et al., 2006) and the tool (e.g., interview structure; Levashina et al., 2014; Thorsteinson, 2018). Although more attention has recently gone to interviewer characteristics (Florea et al., 2019; Frieder et al., 2016), much to our surprise, the latest comprehensive review incorporating interviewer characteristics was published some time ago (Posthuma et al., 2002). We suggest that the hiring discrimination literature incorporate recruiter characteristics and decision-making theory in a more systematic way to increase our understanding of interview bias (Buijsrogge et al., 2016; Florea et al., 2019), and of bias in other recruitment tools that rely on human decision-making (e.g., resume screening; Derous & Ryan, 2019).…”
Section: Theoretical Contributions
confidence: 99%
“…Although using the radar chart from job analysis to construct interview questions increases interview validity, how to use the chart more effectively needs further investigation. Since longer interviews cost the organization more (Thorsteinson, 2018), interviewers are unlikely to spend much time on an interview. Within a limited interview time, interviewers can either go broader, covering more aspects of the radar chart from job analysis to form a comprehensive impression of the candidate, or select certain aspects to gain a deeper understanding of the candidate’s capabilities.…”
Section: Conceptual Background and Hypotheses Development
confidence: 99%
“…However, only indirect effects have been found between job analysis and interview validity (Conway et al., 1995). Extant research mostly uses cognitive test results as the dependent variable to show the validity of using job analysis in structured interviews (Thorsteinson, 2018). These cognitive performance scores are then linked to the candidate’s job performance.…”
Section: Introduction
confidence: 99%