The reaction time (RT)-based Concealed Information Test (CIT) allows for the detection of concealed knowledge (e.g., one's true identity) when the questions are presented in random order (multiple-probe protocol), but its performance is much weaker when questions are presented in blocks (e.g., first a question about the surname, then one about the birthday; single-probe protocol). The latter protocol, however, is the preferred and sometimes even the only feasible interviewing method in real life. In a first, preregistered experiment (n = 363), we show that the validity of the single-probe protocol can be substantially improved by including familiarity-related fillers: stimuli related to either familiarity (e.g., the word "FAMILIAR") or unfamiliarity (e.g., the word "UNFAMILIAR"). We replicated these findings in a second, preregistered experiment (n = 237), where we further found that the use of familiarity-related fillers even improved the classic multiple-probe protocol. We recommend the use of familiarity-related filler trials for the RT-based CIT.
Summary: The Response Time-Based Concealed Information Test (RT-CIT) can reveal when a person recognizes a relevant (probe) item among other, irrelevant items, based on comparatively slow responses to the probe item. For example, if a person is concealing his or her true identity, one can use the suspected identity details as probes and other, random details as irrelevants. However, in our study, we show that even when participants are merely informed about such probes (i.e., the relevant identity details) before performing the RT-CIT, their responses to these details will also be slower. Hence, it is more difficult to distinguish such innocent but pre-informed persons from actually guilty persons. At the same time, we introduce a CIT version with familiarity-related inducer stimuli, but with no targets, that elicits probe-minus-irrelevant RT differences among guilty participants but not among informed innocent participants. Implications for the theory and the application of CITs are discussed.
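For illustration only (a minimal Python sketch, not taken from any of the studies summarized here): the key RT-CIT predictor referred to above is the probe-minus-irrelevant RT difference, which can be computed from trial-level data roughly as follows. The variable names, trial counts, and toy numbers are assumptions for the example.

```python
import numpy as np

def probe_irrelevant_effect(rts, trial_types):
    """Mean RT to probe items minus mean RT to irrelevant items (the CIT predictor)."""
    rts = np.asarray(rts, dtype=float)
    trial_types = np.asarray(trial_types)
    return rts[trial_types == "probe"].mean() - rts[trial_types == "irrelevant"].mean()

# Toy data: probe responses drawn ~30 ms slower on average than irrelevants.
rng = np.random.default_rng(0)
rts = np.concatenate([rng.normal(530, 60, 30),     # 30 probe trials
                      rng.normal(500, 60, 120)])   # 120 irrelevant trials
trial_types = np.array(["probe"] * 30 + ["irrelevant"] * 120)
print(round(probe_irrelevant_effect(rts, trial_types), 1))  # roughly 30 (ms)
```

A near-zero difference is expected for unaware innocent examinees, whereas guilty (and, per the summary above, informed innocent) examinees tend to show a positive difference.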
In recent years, numerous studies have been published on the reaction time (RT)-based Concealed Information Test (CIT). However, an important limitation of the CIT is its reliance on the recognition of the probe item, and therefore its limited applicability when an innocent person is aware of this item. In the present paper, we introduce an RT-based CIT that is based on item-category associations: the Association-based Concealed Information Test (A-CIT). Using the participants' given names as probe items and self-referring "inducer" items (e.g., "MINE" or "ME") that establish an association between ownership and response choices, in Experiment 1 (within-subject design; n = 27) this method differentiated with high accuracy between guilty and innocent conditions. Experiment 2 (n = 25) replicated Experiment 1, except that the participants were informed of the probe item in the innocent condition; nonetheless, the accuracy rate remained high. Implications and future possibilities are discussed.
Binary classification has numerous applications. For one, lie detection methods typically aim to classify each tested person either as a "liar" or as a "truthteller" based on the test results. To draw practical implications, as well as to compare different methods, it is essential to assess diagnostic efficiency, for example as the proportion of correctly classified persons. However, this is not always straightforward. In Concealed Information Tests (CITs), the key predictor value (the probe-irrelevant difference) for "truthtellers" is always similar (zero on average), whereas "liars" are distinguished by a larger value relative to this zero baseline. Hence, in general, the larger the predictor values a given CIT method obtains for "liars" on average, the better the method is assumed to be. This has indeed been assumed in countless studies, and therefore, when comparing the classification efficiencies of two different designs, the mean "liar" predictor values of the two designs have simply been compared to each other (without collecting "truthteller" data, to spare resources). We show, based on the metadata of 12 different experimental designs collected in response time-based CIT studies, that differences in dispersion (i.e., variance in the data, such as the extent of random deviations from the zero average in the case of "truthtellers") can substantially influence classification efficiency, to the point that, in extreme cases, one design may even be superior in classification despite having a smaller mean "liar" predictor value. However, we also introduce a computer simulation procedure to estimate classification efficiency in the absence of "truthteller" data, and validate this procedure via a meta-analysis comparing outcomes based on empirical data versus simulated data.
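As a rough, hypothetical illustration of why dispersion matters (this is not the authors' simulation procedure; the distributions, means, and standard deviations below are invented for the example): if "truthteller" predictor values are assumed to scatter around zero, classification efficiency can be summarized as the area under the ROC curve (AUC), and a design with a smaller mean "liar" value but tighter dispersion can outperform one with a larger mean but wider dispersion. Loosely, the same logic is what allows estimating classification efficiency when no "truthteller" data were collected: simulated zero-centered scores stand in for the missing group.

```python
import numpy as np

def auc(liar_scores, truthteller_scores):
    """Probability that a random 'liar' score exceeds a random 'truthteller'
    score; equivalent to the area under the ROC curve."""
    l = np.asarray(liar_scores)[:, None]
    t = np.asarray(truthteller_scores)[None, :]
    return (l > t).mean() + 0.5 * (l == t).mean()

rng = np.random.default_rng(1)

# Design A: larger mean "liar" effect (40 ms) but wider dispersion (SD = 40).
liars_a = rng.normal(40, 40, 5000)
truth_a = rng.normal(0, 40, 5000)   # "truthtellers" scatter around zero

# Design B: smaller mean "liar" effect (30 ms) but tighter dispersion (SD = 20).
liars_b = rng.normal(30, 20, 5000)
truth_b = rng.normal(0, 20, 5000)

print(round(auc(liars_a, truth_a), 2))  # ~0.76
print(round(auc(liars_b, truth_b), 2))  # ~0.86: better despite the smaller mean
```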