2019
DOI: 10.1111/ijsa.12279

Using a supervised machine learning algorithm for detecting faking good in a personality self‐report

Abstract: We developed a supervised machine learning classifier to identify faking good by analyzing item response patterns of a Big Five personality self-report. We used a between-subject design, dividing participants (N = 548) into two groups, and manipulated their faking behavior via instructions given prior to administering the self-report. We implemented a simple classifier based on the Lie scale's cutoff score and several machine learning models fitted either to the personality scale scores or to the item responses …
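The abstract contrasts a simple Lie-scale cutoff rule with supervised models trained either on scale scores or on the full item response pattern. As a rough illustration of that comparison (not the authors' code), the sketch below assumes a hypothetical CSV with one column per item (item_*), a lie_score column, and a faking label; the file name, cutoff value, and model choice are all assumptions.

```python
# Minimal sketch (not the authors' implementation): compare a Lie-scale
# cutoff rule with a supervised model fitted to item-level responses.
# File name, column names, the cutoff of 7, and the model are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("big_five_responses.csv")          # hypothetical data file
item_cols = [c for c in df.columns if c.startswith("item_")]

X = df[item_cols]        # full item response pattern
y = df["faking"]         # 1 = instructed to fake good, 0 = honest condition

# Baseline: flag respondents at or above a hypothetical Lie-scale cutoff.
lie_pred = (df["lie_score"] >= 7).astype(int)
print("Lie-scale cutoff accuracy:", accuracy_score(y, lie_pred))

# Supervised model fitted to the item responses.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Item-level model accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```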

Cited by 10 publications (58 citation statements); references 37 publications.
“…The results of Study 1 were highly encouraging, exceeding the accuracy of previously introduced Likert-based approaches (Calanna et al., 2020; Kuncel & Borneman, 2007).…”
Section: Study
confidence: 82%
“…We note that none of these studies employed cross-validation, meaning these reported results may in fact have overestimated out-of-sample accuracy. Calanna et al. (2020) utilized a method conceptually closest to ours, achieving a cross-validated accuracy of 76% using response patterns indicative of faking on Likert-type scales as predictors in a logistic regression. Our results for both studies appeared to meet and exceed both this and the vast majority of previously published methods of faking detection in terms of accuracy, and to our knowledge they empirically demonstrate for the first time how faking may be detected on FC measures.…”
Section: Discussion
confidence: 99%
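The quoted passage stresses that accuracy estimates are only meaningful when cross-validated. Purely as a generic sketch (not the cited paper's code), the following snippet estimates out-of-sample accuracy for a logistic regression on item-level predictors with stratified 5-fold cross-validation; the file and column names are placeholders.

```python
# Sketch of a cross-validated accuracy estimate (hypothetical data layout):
# logistic regression on item-level response patterns, scored on held-out
# folds so the reported accuracy is an out-of-sample estimate.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("big_five_responses.csv")          # hypothetical data file
X = df[[c for c in df.columns if c.startswith("item_")]]
y = df["faking"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```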
“…Whereas the idea of investigating response patterns in order to identify faking goes back to Zickar et al. (2004), Calanna et al. (2020) recently showed that the use of response patterns (i.e., all of a participant's responses; e.g., all answers to all items on a self-report) outperforms the use of scores (e.g., the test score from a self-report) in faking detection. Apparently, there is relevant information in response patterns that is not mirrored by scores (e.g., Kuncel & Borneman, 2007; Kuncel & Tellegen, 2009).…”
Section: Large Quantities Of Data
confidence: 99%
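To illustrate the distinction drawn in this passage, the sketch below builds both kinds of predictors from the same raw responses: aggregated scale scores versus the complete item response pattern. The item-to-scale mapping and column names are invented for the example.

```python
# Sketch contrasting score-level and pattern-level predictors
# (hypothetical file, column names, and item-to-scale assignment).
import pandas as pd

df = pd.read_csv("big_five_responses.csv")          # hypothetical data file
scale_items = {                                     # invented mapping
    "openness":          ["item_01", "item_02"],
    "conscientiousness": ["item_03", "item_04"],
    "extraversion":      ["item_05", "item_06"],
    "agreeableness":     ["item_07", "item_08"],
    "neuroticism":       ["item_09", "item_10"],
}

# Score-based predictors: one aggregate per trait (item-level detail is lost).
X_scores = pd.DataFrame({trait: df[items].sum(axis=1)
                         for trait, items in scale_items.items()})

# Pattern-based predictors: every individual item response is retained.
X_pattern = df[[item for items in scale_items.values() for item in items]]
```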