2023
DOI: 10.1080/2330443x.2023.2216748
Shining a Light on Forensic Black-Box Studies

Cited by 9 publications (9 citation statements). References 51 publications.
“…We limited our search for a study into which to inject our intervention of a lay control group to those employing an open‐set as opposed to closed‐set design because of the former's centrality to demonstrating validity, and the substantial criticism leveled at the latter [1, 7, 9]. But this posed issues because the vast majority of such studies—despite the existence of a database maintained by NIST for the express purpose of sharing images of comparison sets from studies for research purposes [61]—have not released representative (much less complete) images of their sample sets to the public (an unfortunate example of the larger deficiencies in data transparency common to such research efforts) [23]. This left us with only two options in terms of studies that allowed for comparisons to the performance of professional examiners by releasing full sets of images of their comparison items [53, 62].…”
Section: Methods
confidence: 99%
“…Second, even if the samples used in a study are sufficiently difficult, much more is required before we can have faith in the validity of a forensic method. As we have previously emphasized, accurate estimates of discipline error require attention to a host of other factors such as test-taking bias, whether participant pools are adequately representative of professional examiners, and whether statistically significant data are missing due to attrition rates or unit nonresponse [20,23,26]. Accordingly, studies that utilize lay control groups to establish minimal test difficulty meet one of the many criteria necessary before reported error rates can confidently be used to reflect the performance of firearm and toolmark comparison as a method, but until the field forthrightly grapples with the broad swath of other criticisms leveled at its validation studies, that showing alone should not satisfy the scientific community or the courts.…”
Section: Limitations and Future Research
confidence: 99%
“…Whatever judges decide regarding the validity/admissibility of firearms examination methods-and some have found them wanting and excluded testimony regarding the source of fired bullets and cartridge cases outright [30][31][32]-they have precious little (if any) means at their disposal to assess the equally important question of whether a given examiner is qualified to offer expert testimony: none of training, experience, or accreditation appear to predict accuracy and empirical measures are lacking at the level of individual examiners. Moreover, the rates at which this will present a problem and imperil the liberty of defendants-at which poor performers will appear as potential witnesses-may well exceed the point estimates in this commentary: the authors of Monson et al may believe that "[t]here is no empirical basis for an assumption of superior performance by those who opted for participation" in their sample of convenience [1], but other scholars disagree [21]. Thus, as many commentators have already argued [2,33], widespread, rigorous, blind proficiency tests are sorely needed to fill the gap.…”
Section: 8%
confidence: 72%
“…But the data discussed in this commentary underscore that any empirical measures of any particular witness's accuracy must be obtained at the individual-examiner level rather than assumed from assessments of method performance overall. Obvious from Table 3, judges cannot merely utilize, when assessing qualifications as opposed to method validity, the accuracy rates reported in studies of firearms examination (even accounting for confidence intervals) because they do not bound the performance of any given witness: the accuracy rates of […] to response data for other scholars to assess [1, 21]; and (2) judges cannot rely on existing proficiency testing data because these tests are almost all declared and not difficult enough to provide meaningful assessments of accuracy on casework samples [19, 22, 23].…”
Section: 8%
confidence: 99%