2022
DOI: 10.1111/ijsa.12382
Effect of job applicant faking and cognitive ability on self‐other agreement and criterion validity of personality assessments

Abstract: This study examined the effect of job applicant faking on the validity of personality assessments, including self‐other correlations, criterion validity, and cognitive ability correlates. By using a large sample, multiple other‐raters, a repeated‐measures design, and a realistic simulated job application, it sought to provide the most precise estimates to date of the effect of the applicant context on self‐other correlations, as well as the influence of cognitive ability on faking. Undergraduate psychology stu…

Cited by 7 publications (9 citation statements)
References 82 publications
“…The current study sheds light on the use of personality assessments for selection during an economic fallout due to the COVID‐19 pandemic. Consistent with previous research (e.g., Mueller‐Hanson et al, 2003; Wood et al, 2022), we found that scores on conscientiousness were higher in a job applicant scenario. Although we already know that most individuals will fake to some extent in selection contexts under normal circumstances, we were surprised to find that faking did not differ based on COVID‐19 prevalence.…”
Section: Discussion (supporting)
confidence: 92%
“…The COVID-19 pandemic has presented organizations with several uncertainties, as there has not been an event of this nature and magnitude (over 700 million cases and almost 7 million deaths) since the … Consistent with previous research (e.g., Mueller-Hanson et al, 2003; Wood et al, 2022), we found that scores on conscientiousness were higher in a job applicant scenario. Although we already know that most individuals will fake to some extent in selection contexts under normal circumstances, we were surprised to find that faking did not differ based on COVID-19 prevalence.…”
Section: Discussion (supporting)
confidence: 65%
“…First, faking research has revealed strong treatment effects (e.g., McDaniel et al, 2009; Röhner et al, 2011), which cause large differences in standard deviations between measurement occasions, thus increasing reliability (see Gollwitzer et al, 2014). Therefore, difference scores in faking research (which demonstrates strong treatment effects) could be anticipated to be reliable, and their frequent and successful application in faking research attests to this (e.g., Alliger & Dwight, 2000; Röhner et al, 2011; Viswesvaran & Ones, 1999; Wood et al, 2022). Second, based on Trafimow's (2015) results, several aspects of the specific research condition (i.e., faking) suggest that difference scores should not be unreliable here.…”
Section: Analytical Approach (mentioning)
confidence: 99%
“…An alternative traditional approach involves the use of difference scores (e.g., Ferrando & Anguiano-Carrasco, 2011; Röhner & Schütz, 2020). This approach is usually implemented to study faking experimentally and usually focuses on differences in test scores between faking and nonfaking conditions (e.g., Alliger & Dwight, 2000; McDaniel et al, 2009; Röhner et al, 2011; Viswesvaran & Ones, 1999; Wood et al, 2022). Although difference scores have been criticized in the past (e.g., Bereiter, 1963), recent research has demonstrated that, under certain conditions, they are a reasonably reliable measure (e.g., Gollwitzer et al, 2014; Trafimow, 2015; Trafimow, 2019).…”
Section: Traces Of Faking and Faking Detection (mentioning)
confidence: 99%
“…Similar to SJTs, there is significant concern that a PAT as a high-stakes exam may be coachable and that applicants may lean toward characteristics they think will be more desirable to programs. Indeed, research has shown that in a high-stakes environment, applicants may engage in substantial response distortion in order to display characteristics that may be more socially desirable [63, 64]. Though response distortion adds noise to the assessment, it has less impact on rank ordering of applicants as applicants with lower scores tend to distort responses more [63].…”
Section: Personality Assessment Tool (PAT) (mentioning)
confidence: 99%