Like all health care workers, rehabilitation professionals are at high risk of burnout. Common mechanisms underlie burnout across the professional groups investigated. Further research on occupational health in rehabilitation settings is needed to prevent burnout.
Background and Purpose. The use of machine learning (ML) models to detect malingering has yielded encouraging results, with promising accuracy levels. We investigated whether this methodology, trained on behavioral features such as response time (RT) and time pressure, can identify faking behavior in self-report personality questionnaires. To do so, we revisited the study by Roma et al. (2018), which showed that RTs and time pressure are useful variables for detecting faking; we then enlarged the sample and applied an ML analysis. Materials and Methods. The sample comprised 175 subjects, all of whom were graduates (having completed at least 17 years of education), male, and Caucasian. Subjects were randomly assigned to four groups: honest speeded, faking-good speeded, honest unspeeded, and faking-good unspeeded. A software version of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) was administered. Results. ML algorithms reached very high accuracies (around 95%) in detecting malingerers when subjects were instructed to respond under time pressure. Classifier performance was lower when subjects responded to the MMPI-2-RF items with no time restriction, with accuracies ranging from 75% to 85%. Further analysis showed that T-scores on the validity scales were ineffective for detecting fakers when participants were not under time pressure (accuracies of 55–65%), whereas temporal features were more useful (accuracies of 70–75%). By contrast, temporal features and validity-scale T-scores were equally effective in detecting fakers when subjects were under time pressure (accuracies above 90%). Discussion. In conclusion, the results demonstrated that ML techniques are highly valuable and outperform more traditional psychometric techniques in detecting fakers in self-report personality questionnaires. The MMPI-2-RF manual criteria for the validity scales are poor at identifying under-reported profiles. Temporal measures, by contrast, are useful for distinguishing honest from dishonest responders, especially when there is no time pressure. Indeed, time pressure brings out malingerers more clearly than the absence of time pressure.
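As a rough illustration of the kind of analysis described above, the sketch below trains a classifier separately on temporal features and on validity-scale T-scores and compares their cross-validated accuracies. It is a minimal example only: the file name, column names, and the choice of a random-forest classifier are assumptions, not details taken from the study.

```python
# Minimal sketch (assumed file and column names): compare how well temporal
# features vs. validity-scale T-scores separate honest from faking-good
# responders, using cross-validated accuracy.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("mmpi_rt_study.csv")             # one row per participant (hypothetical file)
X_time = df[["mean_item_rt", "completion_time"]]  # temporal features (assumed columns)
X_valid = df[["L_r", "K_r", "F_r"]]               # validity-scale T-scores (assumed columns)
y = df["group"]                                   # "honest" vs. "faking_good"

for label, X in [("temporal features", X_time), ("validity T-scores", X_valid)]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{label}: mean 10-fold CV accuracy = {acc:.2f}")
```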
Background and Purpose: Research on the relationship between response latency (RL) and faking in self-administered testing scenarios has generated contradictory findings. We explored this relationship further, aiming to add insight into the reliability of self-report measures. We compared RLs and T-scores on the MMPI-2-RF (validity and restructured clinical [RC] scales) across four experimental groups. Our hypotheses were that the Fake-Good Speeded group would differ in completion time; show higher RLs on the validity scales than the Honest Speeded group; show higher T-scores on the L-r and K-r scales and lower T-scores on the F-r and RC scales; and show higher levels of tension and fatigue. Finally, the impact of the speeded condition on malingering was assessed. Materials and Methods: The sample comprised 135 subjects (M = 26.64, SD = 1.88 years old), all of whom were graduates (having completed at least 17 years of education), male, and Caucasian. Subjects were randomly assigned to four groups: Honest Speeded, Fake-Good Speeded, Honest Un-Speeded, and Fake-Good Un-Speeded. A software version of the MMPI-2-RF and a Visual Analog Scale (VAS) were administered. To test the hypotheses, MANOVAs and binomial logistic regressions were run. Results: Significant differences were found between the four groups, particularly between the Honest and Fake-Good groups in terms of test completion time and the L-r and K-r scales. The speeded condition increased T-scores on the L-r and K-r scales but decreased T-scores on some of the RC scales. The Fake-Good groups also scored higher on the VAS Tension subscale. Completion times for the first and second parts of the MMPI-2-RF and T-scores on the K-r scale appeared to predict malingering. Conclusion: The speeded condition seemed to bring out the malingerers. Limitations include the sample size and gender bias.
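For readers unfamiliar with the regression step mentioned above, the following sketch shows one way a binomial logistic regression predicting group membership from completion times and K-r T-scores could be set up. Variable and file names are hypothetical, and the model specification is an assumption rather than the authors' exact analysis.

```python
# Hypothetical sketch of a binomial logistic regression: do completion times
# for the two halves of the MMPI-2-RF, together with the K-r T-score,
# predict membership in the fake-good group?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mmpi_groups.csv")                    # assumed data file
df["fake"] = (df["group"] == "fake_good").astype(int)  # 1 = fake-good, 0 = honest

model = smf.logit("fake ~ time_part1 + time_part2 + K_r", data=df).fit()
print(model.summary())  # coefficients are on the log-odds scale
```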
The aim of the present study was to explore whether kinematic indicators could improve the detection of subjects demonstrating faking-good behaviour when responding to personality questionnaires. One hundred and twenty volunteers were randomly assigned to one of four experimental groups (honest unspeeded, faking-good unspeeded, honest speeded, and faking-good speeded). Participants were asked to respond to the MMPI-2 underreporting scales (L, K, S) and the PPI-R Virtuous Responding (VR) scale using a computer mouse. The collected data included T-scores on the L, K, S, and VR scales; response times on these scales; and several temporal and spatial mouse parameters. These data were used to investigate whether there were significant differences between the two manipulated variables (honest vs. faking-good; speeded vs. unspeeded). The results demonstrated that T-scores were significantly higher in the faking-good condition than in the honest condition; however, faking-good and honest respondents showed no statistically significant differences between the speeded and unspeeded conditions. Concerning temporal and spatial kinematic parameters, we observed mixed results across scales, and further investigation is required. The most consistent finding, albeit with small observed effects, concerns the L scale, on which faking-good respondents took longer to respond to stimuli and traced wider mouse trajectories to arrive at a given response.

One of the main limitations of the use of self-report questionnaires to assess personality is that such tests are vulnerable to faking behaviour [1], that is, the tendency to deliberately distort one's responses in order to fulfil personal goals [2]. In one form of faking, respondents exaggerate or create symptoms to emphasize their psychological suffering and discomfort (faking-bad); in another, respondents present themselves in a particularly favourable fashion, emphasizing their desirable traits and rejecting their undesirable ones (faking-good). Faking behaviour is widespread in many contexts, with alarming estimates of prevalence (e.g., 30-50% in personnel selection [3] and up to 30% in forensic settings [4,5]). Many studies have focused on faking-bad behaviour and developed tools to facilitate its detection; such tools include the Structured Interview of Reported Symptoms-2 (SIRS-2) [6], the Structured Inventory of Malingered Symptomatology (SIMS) [7], and the Inventory of Problems-29 [8]. Faking-bad behaviour has received more research attention [9,10], perhaps because its welfare/social costs (in terms of, e.g., insurance compensation) are more easily recognizable; as a result, the literature on faking-good is not as rich, and instruments to identify faking-good behaviour are lacking. For this reason, the present study focused specifically on faking-good behaviour. Analysis of validity scales is one of the most commonly used methods to detect fakers. Validity scales were designed to gather information on the validity and interpretability of self-report questionnaires by exploring the ...
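To make the "temporal and spatial mouse parameters" concrete, the sketch below computes two common mouse-tracking indices (total path length and maximum deviation from the straight start-to-response line) from a recorded cursor trajectory. The trajectory format and the choice of indices are assumptions for illustration, not the study's exact feature set.

```python
# Illustrative computation of two spatial mouse-trajectory indices for one item:
# total path length and maximum perpendicular deviation from the straight line
# between the first and last recorded cursor positions.
import numpy as np

def trajectory_features(xy):
    """xy: (n, 2) array of cursor positions sampled while answering one item."""
    xy = np.asarray(xy, dtype=float)
    steps = np.diff(xy, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()

    start, end = xy[0], xy[-1]
    direction = end - start
    norm = np.linalg.norm(direction)
    if norm == 0:
        max_deviation = 0.0
    else:
        rel = xy - start
        # 2-D cross product gives the perpendicular distance to the start-end line
        cross = np.abs(rel[:, 0] * direction[1] - rel[:, 1] * direction[0])
        max_deviation = (cross / norm).max()
    return path_length, max_deviation
```

Wider trajectories of the kind reported for the L scale would show up as larger values of both indices.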
In the context of legal damage evaluations, evaluees may exaggerate or simulate symptoms in an attempt to obtain greater economic compensation. To date, practitioners and researchers have focused on detecting malingering behavior as an exclusively unitary construct. However, we argue that there are two types of inconsistent behavior that speak to possible malingering, each with its own unique attributes: accentuating (i.e., exaggerating symptoms that are actually experienced) and simulating (i.e., fabricating symptoms entirely); it is therefore necessary to distinguish between them. The aim of the present study was to identify objective indicators to differentiate symptom accentuators from symptom producers and consistent participants. We analyzed the Structured Inventory of Malingered Symptomatology (SIMS) scales and the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) validity scales of 132 individuals with a diagnosed adjustment disorder with mixed anxiety and depressed mood who had undergone assessment for psychiatric/psychological damage. The results indicated that the SIMS Total Score, Neurologic Impairment, and Low Intelligence scales and the MMPI-2-RF Infrequent Responses (F-r) and Response Bias (RBS) scales successfully discriminated among symptom accentuators, symptom producers, and consistent participants. Machine learning analysis was used to identify the most efficient parameter for classifying these three groups, identifying the SIMS Total Score as the best indicator.
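The machine-learning step mentioned above is essentially a feature-ranking problem. The sketch below shows one common way to approach it, fitting a random forest to the three groups and inspecting feature importances; the data file, column names, and the random-forest choice are assumptions, not the study's reported pipeline.

```python
# Minimal sketch (assumed columns): rank candidate indices by how much they
# help a classifier separate symptom accentuators, symptom producers, and
# consistent respondents.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("damage_evaluations.csv")  # hypothetical data file
features = ["SIMS_total", "SIMS_NI", "SIMS_LI", "F_r", "RBS"]
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(df[features], df["group"])          # accentuator / producer / consistent

for name, importance in sorted(zip(features, clf.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```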