Valid self-report assessment of psychopathology relies on accurate and credible responses to test questions. Some individuals, in certain assessment contexts, cannot or choose not to answer in a manner representative of their actual traits or symptoms; this is referred to, most broadly, as test response bias. In this investigation, we explore the effect of response bias on the Personality Inventory for DSM-5 (PID-5; Krueger, Derringer, Markon, Watson, & Skodol, 2013), a self-report instrument designed to assess the pathological personality traits used to inform diagnosis of the personality disorders in Section III of DSM-5. A set of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) validity scales, which are used to assess and identify response bias, was employed to identify individuals who engaged in noncredible overreporting (OR) or underreporting (UR), or who responded to the items in a credible manner (credible responding; CR). A total of 2,022 research participants (1,587 students, 435 psychiatric patients) completed the MMPI-2-RF and PID-5; following protocol screening, these participants were classified into OR, UR, or CR response groups based on MMPI-2-RF validity scale scores. Students and patients in the OR group scored significantly higher on the PID-5 than their counterparts in the CR group, whereas those in the UR group scored significantly lower than those in the CR group. Although future research is needed to explore the effects of response bias on the PID-5, results from this investigation provide initial evidence that response bias influences scale elevations on this instrument.
The triarchic model characterizes psychopathy in terms of three distinct dispositional constructs: boldness, meanness, and disinhibition. The model can be operationalized through scales designed specifically to index these domains or by using items from other inventories that provide coverage of related constructs. The present study sought to develop and validate scales for assessing the triarchic model domains using items from the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). A consensus rating approach was used to identify items relevant to each triarchic domain, and following psychometric refinement, the resulting MMPI-2-RF-based triarchic scales were evaluated for convergent and discriminant validity in relation to multiple psychopathy-relevant criterion variables in offender and nonoffender samples. Expected convergent and discriminant associations emerged clearly for the Boldness and Disinhibition scales and somewhat less clearly for the Meanness scale. Moreover, hierarchical regression analyses indicated that all MMPI-2-RF triarchic scales incremented standard MMPI-2-RF scale scores in predicting extant triarchic model scale scores. The widespread use of the MMPI-2-RF in clinical and forensic settings provides avenues for both clinical and research applications in contexts where traditional psychopathy measures are less likely to be administered.
The current investigation examined the utility of the overreporting validity scales of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) in detecting noncredible reporting of symptoms of posttraumatic stress disorder (PTSD) in a sample of disability-seeking veterans. We also examined the effect of mental health knowledge on the utility of these scales by investigating the extent to which they differentiate between veterans with PTSD and individuals with mental health training who were asked to feign symptoms of PTSD on the test. Group differences on validity scale scores indicated that these scales yielded large effect sizes both for differentiating veterans who overreported from those with PTSD and for differentiating mental health professionals from veterans with PTSD. Implications of these results for clinical practice are discussed.