Detecting Faking on a Personality Instrument Using Appropriateness Measurement (1996)
DOI: 10.1177/014662169602000107

Abstract: Research has demonstrated that people can and often do consciously manipulate scores on personality tests. Test constructors have responded by using social desirability and lying scales in order to identify dishonest respondents. Unfortunately, these approaches have had limited success. This study evaluated the use of appropriateness measurement for identifying dishonest respondents. A dataset was analyzed in which respondents were instructed either to answer honestly or to fake good. The item response theory …
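The abstract refers to appropriateness measurement, the item response theory approach of checking how well an individual response pattern fits the model. The paper's own indices are not reproduced on this page, so the following is only a minimal sketch of the classic standardized log-likelihood person-fit statistic (l_z) under a two-parameter logistic model; strongly negative values mark response patterns that are improbable at the person's trait level.

```python
import numpy as np

def prob_2pl(theta, a, b):
    """Endorsement probability under a two-parameter logistic IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def lz_statistic(responses, theta, a, b):
    """Standardized log-likelihood person-fit (appropriateness) index l_z.

    responses : 0/1 vector of item responses for one person
    theta     : that person's trait level (or estimate)
    a, b      : item discrimination and location parameters
    Strongly negative values indicate a pattern that is unlikely given theta,
    which is how misfitting (possibly faked) records get flagged.
    """
    p = prob_2pl(theta, a, b)
    observed = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (observed - expected) / np.sqrt(variance)
```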

Cited by 122 publications (100 citation statements). References: 34 publications.

Citation statements (ordered by relevance):
“…A possible way out would be to use an appropriate reparameterization of the replacement distribution on the basis of, for example, the optimal IRT approach. In this particular reparameterization, faking could be modeled as a change in the trait level of the individual that gives rise to the fake responses via the theta-shift parameterization (Zickar and Drasgow 1996). Alternatively, we might assume that while the trait levels of the individuals remain invariant, the item parameters can vary according to the differential effect of faking (Ferrando and Anguiano-Carrasco 2013).…”
Section: Limitations and Directions For Future Study (mentioning)
confidence: 99%
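The theta-shift idea described above can be made concrete with a small simulation. This is a minimal sketch rather than the cited authors' procedure: the 2PL parameterization, the item parameters, and the one-standard-deviation fake-good shift are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def simulate_responses(theta, a, b, shift=0.0):
    """Dichotomous responses from a 2PL model; faking is represented as a
    constant upward shift applied to theta at response time (theta-shift)."""
    p = prob_2pl(theta + shift, a, b)
    return (rng.random(p.shape) < p).astype(int)

# Illustrative item bank and respondent pool (all values are assumptions)
n_items, n_persons = 20, 500
a = rng.uniform(0.8, 2.0, n_items)            # discriminations
b = rng.normal(0.0, 1.0, n_items)             # item locations
theta = rng.normal(0.0, 1.0, (n_persons, 1))

honest = simulate_responses(theta, a, b, shift=0.0)
faked = simulate_responses(theta, a, b, shift=1.0)   # assumed 1-SD fake-good shift
print(honest.mean(), faked.mean())                   # faked group endorses more items
```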
“…Many researchers have proposed that item response theory (IRT) models can be used to flag potential distorters as well as conduct the analyses at the item level (e.g., Zickar and Drasgow, 1996; Zickar and Robie, 1999). However, an effect known as the item order effect, whereby the answer to a given item depends on the answers to preceding items, is a reason for cautioning against the use of IRT for assessing the effects of response distortion in noncognitive measures.…”
Section: Strategies To Reduce Response Distortion (mentioning)
confidence: 99%
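As one hedged illustration of flagging potential distorters at the person level, the snippet below screens the person-fit index against a cutoff. It assumes the lz_statistic and simulate_responses sketches above (and their a, b, and theta values) are in scope, and it evaluates fit at the known generating trait level, which is more favorable than the practical case where theta must be estimated from the same, possibly distorted, responses.

```python
import numpy as np

# Assumes lz_statistic, simulate_responses, a, b, and theta from the two
# sketches above are in scope.
honest = simulate_responses(theta, a, b, shift=0.0)
faked = simulate_responses(theta, a, b, shift=1.0)

# Person fit is evaluated at the known generating trait level here, which is
# optimistic: in practice theta is estimated from the responses themselves,
# and that estimate absorbs part of the shift.
lz_honest = np.array([lz_statistic(honest[i], theta[i, 0], a, b)
                      for i in range(len(theta))])
lz_faked = np.array([lz_statistic(faked[i], theta[i, 0], a, b)
                     for i in range(len(theta))])

cutoff = -1.645                                # illustrative normal-theory cutoff
print((lz_honest < cutoff).mean())             # approximate false-positive rate
print((lz_faked < cutoff).mean())              # flag rate under uniform faking
```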
“…A custom Turbo Pascal program generated numerous individual "simulees" with various levels of true Work Orientation, and various tendencies to fake on questions known to be "fakable" from research. [162] Results of Zickar's research showed that, even where faking does not alter the observed validity correlation, it causes local distortions in the bivariate distribution between test scores and performance, [156] in effect altering the rank order of applicants for hiring decisions. Zickar's work anticipates the net effects of applicant faking on personality scales.…”
Section: Groups and Teams At Work (mentioning)
confidence: 99%
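The original study used a custom Turbo Pascal program and Work Orientation scale parameters that are not available here; the sketch below is only a stand-in simulation with assumed effect sizes (validity near .45, a quarter of applicants faking by one standard deviation). It illustrates the quoted point: the observed validity correlation moves only modestly while the membership of the top of the score distribution, and hence hiring decisions, changes substantially.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Assumed data-generating model: performance correlates ~.5 with the true
# trait, and the honest test score measures the trait with some error.
true_trait = rng.normal(0.0, 1.0, n)
performance = 0.5 * true_trait + rng.normal(0.0, np.sqrt(0.75), n)
score_honest = true_trait + rng.normal(0.0, 0.5, n)

# Assume a quarter of applicants fake good, inflating their score by 1 SD
fakers = rng.random(n) < 0.25
score_faked = score_honest + np.where(fakers, 1.0, 0.0)

print(np.corrcoef(score_honest, performance)[0, 1])   # observed validity, honest
print(np.corrcoef(score_faked, performance)[0, 1])    # changes only modestly

# But the top of the rank order (e.g., a top-100 hiring cut) changes a lot
top_honest = set(np.argsort(-score_honest)[:100])
top_faked = set(np.argsort(-score_faked)[:100])
print(len(top_honest & top_faked))                    # overlap well below 100
```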