2006
DOI: 10.1002/bsl.730

Do tests of malingering concur? Concordance among malingering measures

Abstract: Malingering test accuracy is increasingly a major issue in psychology and law. Integrating results across measures might offset limitations of a single test, but the practical benefits of using several tests depend on the extent to which they misclassify the same individuals. Data from 66 evaluatees were used to assess the degree of overlap and consistency of classification among several commonly used malingering instruments. Although correlative data indicated that measures were highly redundant even across s…

Cited by 13 publications (8 citation statements, 2008–2024); references 33 publications.

Citation statements (ordered by relevance):
“…Unfortunately, because they only included individuals who evidenced some signs suggestive of malingering, their methodology did not permit assessment of the classification accuracy of individual measures or, more importantly, the extent to which the use of multiple measures could improve upon the classification accuracy of any one measure used in isolation. Farkas, Rosenfeld, Robbins, and van Gorp (2006) also analyzed associations across a series of malingering measures and scales, using measures that were widely studied and frequently utilized in clinical practice. They studied a sample of civil litigants referred for psychological evaluation, comparing performance across several commonly used measures, including the Test of Memory Malingering (TOMM; Tombaugh, 1996), the Validity Indicator Profile (VIP; Frederick, 1997), the Minnesota Multiphasic Personality Inventory (MMPI-2; Butcher et al., 1989), the Millon Clinical Multiaxial Inventory (MCMI-III; Millon et al., 1997), and the Fifteen-Item Test (FIT; Rey, 1964).…”
mentioning
confidence: 99%
“…Although several studies of feigned PTSD have relied on measures that detect feigned psychiatric symptoms, some researchers have also highlighted the need to consider exaggerated cognitive deficits (e.g., impaired memory, concentration deficits; Vasterling & Kleiner, 2005). For example, researchers have studied the utility of the Test of Memory Malingering (TOMM; Tombaugh, 1996) and the Validity Indicator Profile (VIP; Frederick, 1997) in civil forensic evaluations (e.g., Farkas, Rosenfeld, Robbins, & van Gorp, 2006), where claims of PTSD are common. These and other cognitive effort measures may have utility in detecting feigned symptoms that are not adequately targeted by scales such as the M-FAST or the Structured Interview of Reported Symptoms-2 (SIRS-2; Rogers et al., 2010).…”
mentioning
confidence: 99%
“…Simulation designs are limited by the extent to which participants are unable or unwilling to follow instructions to feign symptoms, as well as by questionable generalizability to populations with a significant incentive to feign symptoms. Likewise, bootstrapping techniques have numerous limitations, including a lack of empirical guidelines for determining how to combine multiple criterion measures, the absence of a "gold standard" measure that simultaneously assesses feigned cognitive deficits and feigned psychiatric symptoms, and the erroneous assumption that errors on feigning measures (e.g., a criterion measure and an experimental measure) are unrelated (Farkas et al., 2006). Finally, because bootstrapping designs typically employ rigid cut-offs for criterion measures or remove indeterminate cases, interpretation of test results may be limited to clear-cut cases (Frederick, 2000), resulting in an overestimation of the accuracy of the measure being evaluated.…”
mentioning
confidence: 99%