2019
DOI: 10.1111/peps.12353

Examining the item response process to personality measures in high‐stakes situations: Issues of measurement validity and predictive validity

Abstract: We conducted two experimental studies with between-subjects and within-subjects designs to investigate the item response process for personality measures administered in high- versus low-stakes situations. Apart from assessing measurement validity of the item response process, we examined predictive validity; that is, whether or not different response models entail differential selection outcomes. We found that ideal point response models fit slightly better than dominance response models across high- versus low…
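To make the contrast between the two model families in the abstract concrete, here is a minimal NumPy sketch (not taken from the paper) of a dominance (2PL) item response function, which is monotone in the latent trait, versus the dichotomous generalized graded unfolding model (GGUM), an ideal point model whose endorsement probability is single-peaked around the item location. All parameter values are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def p_dominance_2pl(theta, a, b):
    """Dominance (2PL) response function: endorsement probability
    increases monotonically with the latent trait theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def p_ideal_point_ggum(theta, alpha, delta, tau):
    """Dichotomous GGUM (ideal point) response function: endorsement
    probability peaks near theta = delta and falls off in both directions
    (an unfolding, single-peaked response process).
    alpha = discrimination, delta = item location, tau = threshold
    (typically negative). Illustrative parameters only."""
    d = theta - delta
    # z = 1 ("agree") terms of the dichotomous GGUM (M = 3)
    num = np.exp(alpha * (1.0 * d - tau)) + np.exp(alpha * (2.0 * d - tau))
    # denominator adds the z = 0 ("disagree") terms: exp(0) and exp(3*alpha*d)
    den = num + 1.0 + np.exp(alpha * 3.0 * d)
    return num / den

theta = np.linspace(-3, 3, 7)
print(p_dominance_2pl(theta, a=1.2, b=0.0))                       # monotone in theta
print(p_ideal_point_ggum(theta, alpha=1.2, delta=0.0, tau=-1.0))  # peaked near theta = delta
```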

Citations: Cited by 14 publications (11 citation statements)
References: 69 publications (114 reference statements)
“…Considering the GGUM's mathematical complexity and the resulting estimation difficulties, some studies on detecting faking have used other methods, for example, techniques based on reaction times and scored invalidity scales (Sellbom and Bagby, 2010; Monaro et al., 2018; Roma et al., 2018; Mazza et al., 2019), and these generally obtained more accurate outcomes. Finally, practically speaking, the use of ideal point models does not seem to yield any improvement in predictive validity compared with dominance models (Zhang et al., 2019). Hence there are still some issues with ideal point models when used for modeling faking response data.…”
Section: Discussion
Citation type: mentioning · Confidence: 98%
“…More recent studies utilized IRT-based approaches to validate forced-choice tests (P. Lee et al., 2018; Morillo et al., 2019; Ng et al., 2021; Walton et al., 2019; Wetzel & Frick, 2020; Zhang et al., 2020) and to demonstrate the robustness of forced-choice formats to faking (Usami et al., 2016; Wetzel et al., 2021). However, it should be noted that this claim was based on forced-choice mean scores showing less inflation than Likert-scale scores, and faking was still observed in forced-choice formats (Pavlov et al., 2019; Wetzel et al., 2021).…”
Section: Research Backgrounds
Citation type: mentioning · Confidence: 99%
“…We estimated honest and faking factor scores in a common model so that both types of scores are on the same scale. Solely to test whether the assumption of identical item parameters in both conditions notably affects the results, we also estimated one model for each of the two conditions (Zhang et al., 2020); the factor loadings from the two models were highly correlated (r = .98). All further analyses are based on the joint model.…”
Section: Trait Score Estimation
Citation type: mentioning · Confidence: 99%
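The robustness check described in the excerpt above (fitting one model per condition and correlating the resulting factor loadings) can be illustrated with a short NumPy sketch. The loading vectors below are made up for illustration; only the correlation step mirrors the described check.

```python
import numpy as np

# Hypothetical factor loadings from two separately estimated models
# (honest vs. faking condition); values are illustrative only.
loadings_honest = np.array([0.62, 0.55, 0.71, 0.48, 0.66, 0.59])
loadings_faking = np.array([0.60, 0.57, 0.69, 0.50, 0.64, 0.61])

# Pearson correlation between the two loading vectors; a value near 1
# suggests the assumption of identical item parameters is tenable.
r = np.corrcoef(loadings_honest, loadings_faking)[0, 1]
print(f"Loading correlation: r = {r:.2f}")
```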