Within the scope of judicial decisions, approaches to distinguishing between true and fabricated statements have been of particular importance since ancient times. Although methods focusing on "prototypical" deceptive behavior (e.g., psychophysiological phenomena, nonverbal cues) have largely been rejected on grounds of validity, content-based techniques constitute a promising approach and are well established in the applied forensic context. The basic idea of this approach is that experience-based and non-experience-based statements differ in their content-related quality. In order to test the validity of the most prominent content-based techniques, Criteria-Based Content Analysis (CBCA) and Reality Monitoring (RM), we conducted a comprehensive meta-analysis of English- and German-language studies. Based on a variety of decision criteria, 56 studies were included, revealing an overall effect size of g = 1.03 (95% confidence interval [0.78, 1.27], Q = 420.06, p < .001, I² = 92.48%, N = 3429). There was no significant difference in the effectiveness of CBCA and RM. Additionally, we investigated a number of moderator variables, such as characteristics of participants, statements, and judgment procedures, as well as general study characteristics. Results showed that the application of all CBCA criteria outperformed any incomplete CBCA criteria set. Furthermore, statement classification based on discriminant functions revealed higher discrimination rates than decisions based on sum scores. Finally, unpublished studies showed higher effect sizes than studies published in peer-reviewed journals. All results are discussed in terms of their significance for future research (e.g., developing standardized decision rules) and practical application (e.g., user training, applying the complete criteria set).
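For readers unfamiliar with how a pooled effect size with Q and I² is obtained, the following is a minimal sketch of DerSimonian-Laird random-effects pooling, a standard estimator for heterogeneous study sets; the abstract does not state which estimator the authors used, and the study values below are illustrative placeholders, not the 56 included studies.

```python
import numpy as np

def random_effects_pool(g, var_g):
    """Pool per-study Hedges' g values with a DerSimonian-Laird random-effects model."""
    g, var_g = np.asarray(g, float), np.asarray(var_g, float)
    w_fixed = 1.0 / var_g                          # inverse-variance (fixed-effect) weights
    g_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)
    k = len(g)
    Q = np.sum(w_fixed * (g - g_fixed) ** 2)       # heterogeneity statistic
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (Q - (k - 1)) / c)             # between-study variance estimate
    w = 1.0 / (var_g + tau2)                       # random-effects weights
    g_pooled = np.sum(w * g) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (g_pooled - 1.96 * se, g_pooled + 1.96 * se)
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    return g_pooled, ci, Q, i2

# Illustrative inputs only (hypothetical per-study g values and variances).
g_example = [0.8, 1.2, 0.5, 1.6, 0.9]
var_example = [0.05, 0.08, 0.04, 0.10, 0.06]
print(random_effects_pool(g_example, var_example))
```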
Dual-task costs might result from confusion at the task-set level when the two tasks are not represented as distinct task sets but are instead integrated into a single task set. This suggests that events in the two tasks are stored and retrieved together as an integrated memory episode. In a series of three experiments, we tested for such integrated task processing and whether it can be modulated by regularities between the stimuli of the two tasks (across-task contingencies) or by sequential regularities within one of the tasks (within-task contingencies). Building on the experimental approach of feature binding in action control, we tested whether participants in a dual-tasking experiment show partial-repetition costs: they should be slower when only the stimulus in one of the two tasks is repeated from Trial n − 1 to Trial n than when the stimuli in both tasks repeat. In all three experiments, the participants processed a visual-manual and an auditory-vocal tone-discrimination task, which were always presented concurrently. In Experiment 1, we show that retrieval of Trial n − 1 episodes is stable across practice if the stimulus material is drawn randomly. Across-task contingencies (Experiment 2) and sequential regularities within a task (Experiment 3) can compete with n − 1-based retrieval, leading to a reduction of partial-repetition costs with practice. Overall, the results suggest that participants do not separate the processing of the two tasks, although within-task contingencies might reduce integrated task processing.
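As an illustration of the partial-repetition-cost logic described above, the following sketch computes the mean RT difference between trials in which only one task's stimulus repeats and trials in which both repeat. The data and variable names are hypothetical and not taken from the reported experiments.

```python
import numpy as np

def partial_repetition_cost(stim_task1, stim_task2, rt):
    """Mean RT difference between partial repetitions (only one task's stimulus
    repeats from Trial n-1 to Trial n) and full repetitions (both repeat)."""
    s1, s2, rt = np.asarray(stim_task1), np.asarray(stim_task2), np.asarray(rt, float)
    rep1 = s1[1:] == s1[:-1]       # stimulus repetition in task 1
    rep2 = s2[1:] == s2[:-1]       # stimulus repetition in task 2
    full = rep1 & rep2             # both stimuli repeat
    partial = rep1 ^ rep2          # exactly one stimulus repeats
    return rt[1:][partial].mean() - rt[1:][full].mean()

# Toy trial sequence: stimulus codes per task and response times in ms.
s1 = [1, 1, 2, 2, 2, 1, 1]
s2 = ['low', 'low', 'low', 'high', 'high', 'high', 'high']
rt = [610, 540, 650, 700, 560, 690, 555]
print(partial_repetition_cost(s1, s2, rt))  # positive value = partial-repetition cost
```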
Free choice tasks provide two or more equally valid response options per stimulus from which participants can choose. In investigations of the putative difference between self-generated and externally triggered actions, they are often contrasted with forced choice tasks, in which only one response option is considered correct. Responses in free choice tasks are usually slower than forced choice responses, which may point to a qualitative difference in response selection. It has, however, also been suggested that free choice tasks are in fact random generation tasks. Here, we tested the prediction that, in this case, the randomness of free choice responses depends on working memory (WM) load. In Experiment 1, participants were provided with varying levels of external WM support in the form of displayed previous choices. In Experiment 2, WM load was induced via a concurrent n-back task. The data generally confirm the prediction: in Experiment 1, WM support improved both the randomness and the speed of responses. In Experiment 2, randomness decreased and responses slowed down with increasing WM load. These results suggest that free choice tasks have much in common with random generation tasks.
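The abstract does not specify how randomness was quantified. One common index for choice sequences is the entropy of repeat/alternate transitions; the sketch below assumes this (hypothetical) measure purely for illustration.

```python
import numpy as np

def repetition_entropy(choices):
    """First-order entropy (in bits) of repeat/alternate transitions in a choice
    sequence; values near 1.0 indicate near-random switching, 0.0 a fully
    predictable repeat-or-alternate pattern."""
    c = np.asarray(choices)
    p_repeat = (c[1:] == c[:-1]).mean()        # proportion of repetitions
    probs = np.array([p_repeat, 1 - p_repeat])
    probs = probs[probs > 0]                   # avoid log2(0)
    return float(-(probs * np.log2(probs)).sum())

print(repetition_entropy([0, 0, 1, 1, 0, 1, 1, 0, 0, 1]))  # close to 1 = near-random
```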
Response times (RTs) for free choice tasks are usually longer than those for forced choice tasks. We examined the cause of this difference in a study with intermixed free and forced choice trials and adopted the rationale of sequential sampling frameworks to test two alternative accounts: longer RTs in free choices are caused (1) by lower rates of information accumulation, or (2) by additional cognitive processes that delay the start of information accumulation. In three experiments, we made these accounts empirically discriminable by manipulating decision thresholds via the frequency of catch trials (Exp. 1) or via induced time pressure (Exp. 2 and 3). Our results supported the second account, suggesting a temporal delay of information accumulation in free choice tasks while the accumulation rate remains comparable. We propose that response choice in both tasks relies on information accumulation towards a specific goal. Whereas in forced choice tasks this goal is externally determined by the stimulus, in free choice tasks it needs to be generated internally, which requires additional time.
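To illustrate why threshold manipulations can discriminate the two accounts, the following sketch simulates a simple one-boundary accumulator with hypothetical parameters (not the authors' model or fitted values): a lower accumulation rate produces a free-forced RT difference that grows with the decision threshold, whereas a pure onset delay adds a roughly constant amount regardless of threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_rt(drift, threshold, onset_delay, n=1000, dt=0.001, noise=1.0):
    """Mean RT for a one-boundary diffusion: evidence drifts toward a threshold;
    onset_delay models processes that precede the start of accumulation."""
    rts = np.empty(n)
    for i in range(n):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = onset_delay + t
    return rts.mean()

for a in (0.8, 1.6):                                              # low vs. high threshold
    forced = mean_rt(drift=2.0, threshold=a, onset_delay=0.0)
    free_rate = mean_rt(drift=1.2, threshold=a, onset_delay=0.0)  # account 1: lower rate
    free_delay = mean_rt(drift=2.0, threshold=a, onset_delay=0.15)  # account 2: delayed onset
    print(f"threshold={a}: rate-account diff={free_rate - forced:.3f} s, "
          f"delay-account diff={free_delay - forced:.3f} s")
```

Under these assumed parameters, the rate-account difference scales with the threshold while the delay-account difference stays near 0.15 s, which mirrors the diagnostic logic of manipulating thresholds via catch-trial frequency or time pressure.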