Several forensic sciences, especially those of the pattern-matching kind, are increasingly seen to lack the scientific foundation needed to justify their continued admission as trial evidence. Indeed, several have been abolished in the recent past. A likely next candidate for elimination is bitemark identification. In recent years, a number of individuals convicted on the basis of erroneous bitemark identifications have been exonerated by DNA evidence, prompting intense scientific and legal scrutiny. An important National Academies review found little scientific support for the field. The Texas Forensic Science Commission recently recommended a moratorium on the admission of bitemark expert testimony, and the California Supreme Court has a case before it that could begin a national dismantling of forensic odontology. This article describes the (legal) basis for the rise of bitemark identification and the (scientific) basis for its impending fall. It explains the general logic of forensic identification and the specific claims of bitemark identification, and it reviews the relevant empirical research, highlighting both how little research exists and how little support the existing research provides. The rise and possible fall of bitemark identification evidence have broader implications: they expose the weak scientific culture of forensic science and the law's difficulty in evaluating and responding to unreliable and unscientific evidence.
This paper argues that judges assessing the scientific validity and the legal admissibility of forensic science techniques ought to privilege testing over explanation. Their evaluation of reliability should be more concerned with whether the technique has been adequately validated by appropriate empirical testing than with whether the expert can offer an adequate description of the methods she uses, or satisfactorily explain her methodology or the theory from which her claims derive. This paper explores these issues within two specific contexts: latent fingerprint examination and the use of breath tests for the detection of alcohol. Especially in the forensic science arena, I suggest courts have often been seduced by superficially plausible explanations and descriptions of a technique or method, and permitted these to serve as a substitute for empirical testing. Thinking through these two examples illustrates both why evaluating the extent of testing should be the most important method by which courts assess reliability, and why, when other forms of explanatory evidence are readily available, we may nonetheless elect to make use of them. This paper suggests that these descriptions and explanations may at times usefully supplement evidence of testing, but should not generally be substituted for it. Finally, this paper embraces a kind of evidentiary pragmatism, in which the quantum of evidence required to establish legal reliability is determined not in the abstract, but in relation to the evidence that is, or ought to be, available as a result of reasonable research and investigation.
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions: they are often noisy and distorted and may capture only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of the features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about which characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance, along with judgments of difficulty and confidence, on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as of perceived difficulty and confidence. A number of useful predictors emerged, including image-quality metrics such as intensity and contrast information, measures of information quantity such as total fingerprint area, and configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of experts' overall low error rates, a regression model incorporating the derived predictors showed reasonable success in predicting objective difficulty for print pairs, both in goodness-of-fit measures on the original data set and in a cross-validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessments of difficulty in fingerprint comparisons.
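To make the modeling approach concrete, here is a minimal sketch of how per-pair image metrics might be regressed against a difficulty score, checked both by in-sample goodness of fit and by cross-validation. This is an illustration only, not the study's actual code: the four features, the synthetic data, and the difficulty score are hypothetical stand-ins for the measures described above.

```python
# Illustrative sketch: multiple regression predicting comparison
# difficulty from image metrics. All features and data are
# hypothetical, not the authors' own measures or dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_pairs = 200

# Hypothetical per-pair measures: image quality (intensity, contrast),
# information quantity (fraction of fingerprint area present), and a
# configural feature (clarity of global ridge structure).
X = np.column_stack([
    rng.uniform(0.2, 0.9, n_pairs),   # mean intensity
    rng.uniform(0.1, 1.0, n_pairs),   # contrast
    rng.uniform(0.3, 1.0, n_pairs),   # fraction of print area present
    rng.uniform(0.0, 1.0, n_pairs),   # clarity of global features/ridges
])

# Hypothetical difficulty score (e.g., pooled examiner ratings):
# comparisons get harder as area and ridge clarity decrease.
difficulty = 1.5 - 0.8 * X[:, 2] - 0.6 * X[:, 3] \
    + rng.normal(0.0, 0.1, n_pairs)

model = LinearRegression().fit(X, difficulty)

# Goodness of fit on the original data, plus a 5-fold
# cross-validation check of out-of-sample prediction.
print("in-sample R^2:", model.score(X, difficulty))
print("cross-validated R^2:",
      cross_val_score(model, X, difficulty, cv=5).mean())
```

In this toy setup, the coefficients on area and clarity recover the assumed relationship, and the cross-validated R^2 gauges whether the fitted model generalizes rather than merely fitting noise, the same kind of check the abstract describes.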