2011
DOI: 10.1111/j.1468-2389.2011.00539.x
Do Applicants and Incumbents Respond to Personality Items Similarly? A Comparison of Dominance and Ideal Point Response Models

Abstract: This study used an ideal point response model to examine the extent to which applicants and incumbents differ when responding to personality items. It was hypothesized that applicants' responses would exhibit less folding at high trait levels than incumbents' responses. We used sample data from applicants (N = 1,509) and incumbents (N = 1,568) who completed the 16 Personality Factor Questionnaire (16PF) Select. Differential item functioning (DIF) and differential test functioning (DTF) analyses were conducted using the generalized graded unfolding …
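The ideal point analyses described in the abstract rest on the generalized graded unfolding model (GGUM; Roberts, Donoghue, & Laughlin, 2000). A minimal sketch of its category response function in Python; the parameter values in the usage note are illustrative only, not estimates from this study:

```python
import math

def ggum_prob(theta, alpha, delta, taus, z):
    """P(response = z | theta) under the generalized graded unfolding
    model (GGUM).

    theta : respondent trait level
    alpha : item discrimination
    delta : item location (the 'ideal point')
    taus  : threshold parameters tau_0..tau_C, with tau_0 = 0 by convention
    z     : observed response category, 0..C
    """
    C = len(taus) - 1          # highest observable response category
    M = 2 * C + 1              # index of the highest latent subjective response

    def term(w):
        s = sum(taus[:w + 1])  # cumulative thresholds through category w
        # two latent subjective responses ("agree from below" and
        # "agree from above") map onto the same observed category w
        return (math.exp(alpha * (w * (theta - delta) - s)) +
                math.exp(alpha * ((M - w) * (theta - delta) - s)))

    return term(z) / sum(term(w) for w in range(C + 1))
```

Because the function is single-peaked around delta, the probability of endorsing a high category falls off on both sides of the item location; the "folding" at high trait levels mentioned in the abstract is exactly this drop for respondents whose theta lies well above delta. For example, with `taus = [0.0, -1.2, -0.8, -0.4]`, `alpha = 1.0`, and `delta = 0.0`, a respondent at `theta = 5.0` is far more likely to pick category 0 than category 3, even though their trait level is high.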

Cited by 16 publications (21 citation statements).
References 39 publications.
“…Nevertheless, the results of some studies suggest that the present findings are not specific to the NEO PI‐R. For example, O'Brien and LaHuis () found that applicants showed larger means and smaller SDs than incumbents in two large samples ( n = 1,509 and n = 1,568) in a study using the 16PF.…”
Section: Discussion
confidence: 52%
“…In sum, the results showed that mean scores, as a possible reflection of self‐enhancement, seem to work somewhat differently for personality constructs (i.e., PQ and SOC) and health constructs (i.e., GHQ and IES‐R). Seemingly, potential self‐enhancement is stronger on personality constructs, which may or may not have consequences for the selection decision and subsequently the predictive validity (Lievens et al, ; O'Brien & LaHuis, ; Schmitt & Oswald, ). However, from a personnel selection point of view, the results of this study suggest that the norms used for selection (e.g., cut‐off scores) should be adapted to the population, as applicants' scores tend to be overestimations (Barrick & Mount, ).…”
Section: Discussion
confidence: 99%
“…Yet, research regarding the degree of self‐enhancement and its effects on selection decisions and later performance has been inconclusive. While some studies report substantial amounts of self‐enhancement among job applicants (e.g., Lievens, Klehe, & Libbrecht, ; Stark, Chernyshenko, Chan, Lee, & Drasgow, ), other studies imply that the degree of self‐enhancement is limited (e.g., Ellingson, Sackett, & Connelly, ; O'Brien & LaHuis, ; Smith, Hanges, & Dickson, ) and that its effect on predictive validity is negligible or even nullified (e.g., Barrick & Mount, ; Ones et al, ; Schmitt & Oswald, ). The effect of self‐enhancement seems to be especially weak in screening‐out situations, that is, when the selection system aims at rejecting the less suitable in the population (Sackett & Lievens, ).…”
Section: Introduction
confidence: 99%
“…Regarding modeling and detection of AFB, we found approaches that take the complexity of response processes at the item level into account. With IRT and SEM models, researchers have managed to give insights into individual response strategies, showing that faking varies both quantitatively and qualitatively among respondents (Ziegler et al, 2015), and that it depends on item content (O'Brien & LaHuis, 2011). Thus, we suggest that the combined use of qualitative and quantitative modeling techniques best suits the current understanding of AFB, and should therefore be employed more often in future research.…”
Section: The (Lacking) Usage Of Theory In Practical Research
confidence: 94%
“…Studies using item response theory (IRT) models (O'Brien & LaHuis, 2011; Robie, Zickar, & Schmit, 2001; Scherbaum, Sabet, Kern, & Agnello, 2013; Zickar et al, 2004) and/or structural equation modeling (SEM) techniques (Honkaniemi, Tolvanen, & Feldt, 2011 2012; Ziegler & Buehner, 2009; Ziegler et al, 2015) have found that faking behavior differs between tests, items, and individuals, and is hard to disentangle. Additionally, examining response latencies has provided insights into response processes (Holden, Kroner, Fekken, & Popham, 1992; Holden & Lambert, 2015; Komar, Komar, Robie, & Taggar, 2010).…”
Section: How Can Faking Be Detected? In Quest Of A Faking Fingerprint
confidence: 99%