Forensic handwriting examination involves the comparison of writing samples by forensic document examiners (FDEs) to determine whether they were written by the same person. Here we report the results of a large-scale study conducted to assess the accuracy and reliability of handwriting comparison conclusions. Eighty-six practicing FDEs each conducted up to 100 handwriting comparisons, resulting in 7,196 conclusions on 180 distinct comparison sets, using a five-level conclusion scale. Erroneous “written by” conclusions (false positives) were reached in 3.1% of the nonmated comparisons, while 1.1% of the mated comparisons yielded erroneous “not written by” conclusions (false negatives). False positive rates were markedly higher for nonmated samples written by twins (8.7%) than by nontwins (2.5%). Notable associations between training and performance were observed: FDEs with less than 2 y of formal training generally had higher error rates, but also higher true positive and true negative rates, because they tended to provide more definitive conclusions; FDEs with at least 2 y of formal training were less likely to reach definitive conclusions, but the definitive conclusions they did reach were more likely to be correct (higher positive and negative predictive values). We observed no association between writing style (cursive vs. printing) and rates of errors or incorrect conclusions. This report also details the repeatability and reproducibility of conclusions and how conclusions are affected by the quantity of writing and the similarity of content.
In the past, pattern disciplines within forensic science have periodically faced criticism for their subjective, qualitative nature and the perceived absence of research evaluating and supporting the foundations of their practices. Recently, however, forensic scientists and researchers in pattern evidence analysis have developed and published approaches that are more quantitative, objective, and data driven. This effort includes automation, algorithms, and measurement science, with the end goal of enabling conclusions informed by quantitative models. Before such tools can be employed, forensic evidence must be digitized in a way that balances high-quality capture of detail and content with minimal background noise imparted by the selected technique. Although the current work describes the optimization of a method to digitize physical documentary evidence for use in semi-automated trash mark examinations, the approach could assist other disciplines in which digitization of physical items of evidence is prevalent. For trash mark examinations specifically, high-resolution photography was found to provide better digital versions of evidentiary items than high-resolution scanning.