Binary classification has numerous applications. For example, lie detection methods typically aim to classify each tested person either as a "liar" or as a "truthteller" based on the given test results. To infer practical implications, as well as to compare different methods, it is essential to assess diagnostic efficiency, for example by demonstrating the number of correctly classified persons. However, this is not always straightforward. In Concealed Information Tests (CITs), the key predictor value (the probe-irrelevant difference) for "truthtellers" is always similar (zero on average), and "liars" are always distinguished by a larger value (i.e., a larger number resulting from the CIT, as compared to the zero baseline). Hence, in general, the larger the predictor values a given CIT method obtains for "liars" on average, the better this method is assumed to be. This has indeed been assumed in countless studies, and therefore, when the classification efficiencies of two different designs were compared, the mean "liar" predictor values of the two designs were simply compared to each other (with no "truthteller" data collected, to spare resources). We show, based on the pooled data of 12 different experimental designs collected in response time-based CIT studies, that differences in dispersion (i.e., variance in the data, e.g., the extent of random deviations from the zero average in the case of "truthtellers") can substantially influence classification efficiency, to the point that, in extreme cases, one design may even be superior in classification despite having a smaller mean "liar" predictor value. However, we also introduce a computer simulation procedure to estimate classification efficiency in the absence of "truthteller" data, and we validate this procedure via a meta-analysis comparing outcomes based on empirical versus simulated data.
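To make the role of dispersion concrete, the following minimal Python sketch (not the simulation procedure introduced in the paper; all distributions, means, and standard deviations are hypothetical) compares two designs in which "truthteller" predictor values are simulated as zero-mean noise rather than collected: the design with the smaller mean "liar" predictor value attains the higher area under the ROC curve (AUC) because its dispersion is proportionally smaller.

```python
# Minimal illustrative sketch (not the paper's actual procedure). All means and
# standard deviations below are hypothetical, chosen only to show the principle.
import numpy as np

rng = np.random.default_rng(42)
n = 2000  # simulated participants per group


def auc(liars, truthtellers):
    # Area under the ROC curve: probability that a randomly chosen "liar"
    # has a larger predictor value than a randomly chosen "truthteller".
    diffs = liars[:, None] - truthtellers[None, :]
    return np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)


# Hypothetical Design A: larger mean "liar" predictor value, larger dispersion.
liars_a = rng.normal(loc=60, scale=80, size=n)
# Hypothetical Design B: smaller mean "liar" predictor value, smaller dispersion.
liars_b = rng.normal(loc=40, scale=30, size=n)

# "Truthteller" predictor values scatter randomly around zero; here they are
# simulated (with assumed dispersions) rather than empirically collected.
truthtellers_a = rng.normal(loc=0, scale=40, size=n)
truthtellers_b = rng.normal(loc=0, scale=25, size=n)

print(f"Design A AUC: {auc(liars_a, truthtellers_a):.3f}")  # roughly 0.75
print(f"Design B AUC: {auc(liars_b, truthtellers_b):.3f}")  # roughly 0.85, despite the smaller mean
```

Under a normality assumption, this reflects the standard binormal result AUC = Φ((μ_liar − μ_truthteller) / √(σ_liar² + σ_truthteller²)): classification efficiency depends on the ratio of the mean difference to the pooled dispersion, not on the mean alone.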